If you think a typical EA cause has very high impact, it seems quite plausible that you can have even higher impact by working one level of “meta” up -- working not on that cause directly, but instead working on getting more people to work on that cause.
For example, while the impact of a donation to the Against Malaria Foundation seems quite large, it should be even more impactful to donate to Charity Science, Giving What We Can, The Life You Can Save, or Raising for Effective Giving, all of which claim to be able to move many dollars to AMF for every dollar donated to them. Likewise, if you think an individual EA is quite valuable because of the impact they’ll have, you may want to invest your time and money not in having a direct impact, but in producing more EAs!
However, while I agree with this logic, I’m nervous about going too far. As Dylan Matthews says, “if you take meta-charity too far, you get a movement that's really good at expanding itself but not necessarily good at actually helping people”. This is what leads to what Matthews called “[d]oing good through aggressive self-promotion” or what I’m calling “the meta trap”. While some meta-projects may have the highest impact in expectation, there are higher-order reasons to want to avoid giving all your resources to meta orgs.
Meta Trap #1: Meta Orgs Risk Not Actually Having an Impact
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable? (1) Linda is a bank teller or (2) Linda is a bank teller and is active in the feminist movement.
When asked by Tversky and Kahneman, the majority of people picked #2. However, #2 cannot be more probable, since the probability of two events both occurring cannot be greater than the probability of either one of those events occurring on its own.
This is called the conjunction fallacy, and it is a classic bias of human rationality. However, it’s also a classic bias of meta-charity.
When you chain different probabilities together, every additional step in the chain will, in almost every case, weaken it. This is also true when chaining together steps of meta-charity -- while you’re getting higher returns in expected value, you’re also reducing the chance that the impact will actually occur.
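To make the tradeoff concrete, here is a minimal sketch of the arithmetic. All the numbers are hypothetical assumptions chosen for illustration: five meta steps, each with its own made-up chance of success, and a made-up 2x leverage multiplier gained per step.

```python
# Hypothetical per-step success probabilities for a five-step meta-chain
# (e.g., staff hire -> mentor recruiting -> chapter setup -> student
# recruitment -> graduates donating). These numbers are illustrative only.
step_success = [0.9, 0.8, 0.8, 0.7, 0.7]

# Assumed leverage multiplier gained at each meta step (also made up).
leverage_per_step = 2.0

# The chain only pays off if every step works, so the probabilities multiply.
p_chain = 1.0
for p in step_success:
    p_chain *= p

# Expected leverage relative to donating directly: upside times chance of success.
expected_multiplier = p_chain * leverage_per_step ** len(step_success)

print(f"Chance the whole chain works: {p_chain:.2%}")
print(f"Expected leverage vs. donating directly: {expected_multiplier:.1f}x")
```

Under these assumptions the expected value still beats donating directly, but the chance that any impact occurs at all has fallen below a third -- which is exactly the risk/expectation tension described above.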
Consider someone thinking of donating to fund the salary of a staff member to work full-time finding volunteer mentors to advise new EA chapters at various universities. These EA chapters will in turn bring more college students into EA, and these new EAs will then graduate and will all earn-to-give for GiveWell top charities. (While a bit silly-sounding, this plan is realistic enough in EA that I’ve actually funded a form of it.)
This plan could have quite a high impact. While donating to AMF all our lives is great, if we can spend our effort to get two people to donate to AMF instead of us, we’ve doubled our impact. If we can spend our effort creating an entire college group to get dozens of people to donate to AMF, so much more impact! And then we can expand an entire network of college groups! And then we can become even more efficient in expanding this network. So much impact!
However, we’ve also now constructed a meta-chain that is five steps removed from the actual impact. There’s a lot that can now go wrong in this chain -- the chapters could get set up successfully but fail to get enough people to donate, the chapters could fail to get set up at all due to problems unrelated to the mentoring, the mentors themselves could fail to be better than if the full-time staff member just advised full-time instead, and the staff member could end up being really bad at recruiting volunteer mentors.
This doesn’t mean the chapter chain doesn’t have high expected value or that it’s not worth doing. It just means that it’s risky, and I’m nervous that as the levels of meta scale up, the additional risk taken on by introducing ways to break the chain might be much greater than the additional leverage taken by introducing another meta step.
I do think meta-charities are worth pursuing and I fund them myself. But for every time I think about how good of an opportunity the connections facilitated at EA Global are, I also worry about whether the new EAs brought into the movement really are going to create more counterfactual impact than the considerable cost of the conference.
Meta Trap #2: Meta Orgs Risk Curling In On Themselves
When I was in college, I once joked about a fictitious club called “Club Club” with the only purpose of perpetuating the club. Every Club Club meeting would be about how to advertise Club Club, how to recruit more Club Club members, and how to better retain the members that Club Club already had. Club Club wouldn’t actually do anything. On days when I’m especially grumpy, I worry that EA may become that.
The problem is that if we spread the idea that meta-orgs are the highest impact opportunity too well, we risk the creation of a meta-movement to spread the meta-movement and nothing else. Once meta-orgs get to the point where it’s all about EAs helping other EAs to help EAs, we’ve gotten to the point where there’s serious risk that actual impact won’t occur.
Consider that plan again where we get someone to full-time find chapter advisors for setting up lots of EA chapters. Now imagine that instead of advocating to the college students that they earn to give for GiveWell charities we suggest that this chapter building project is really the best possible thing to be doing, so they should get involved in, donate to, and volunteer for it. Now we’ve got chapters developing chapters to perpetuate developing more chapters. But what does this actually accomplish? We might as well have them working to set up Club Club.
Meta Trap #3: You Can’t Use Meta as an Excuse for Cause Indecisiveness
EA is made up of quite a few different object-level causes and it can be hard to figure out which one is the best. Is it global poverty? Existential risk reduction? Animal welfare? Or something else?
Somehow, meta-work became its own cause in this list, but I think that’s a mistake. Meta-work isn’t a cause, it’s a meta-cause, and it’s supposed to make the above actual causes go better. To understand the meta-impact that meta-work has, it’s important to understand the object-level causes and have opinions on which one is best.
However, I feel like far too often people (including myself) hide behind donations to meta-charity as a feel-good way to support the EA movement as a whole without doing the hard work of figuring out which object-level causes are the best. Unless you’re funding cause prioritization research or hoping to bring in EAs who will shed more light on the question of which cause is the best, this seems like a big risk. It avoids learning opportunities and discussions we could be having about what the best causes actually are, which also pushes the entire movement forward.
Meta Trap #4: At Some Point, You Have to Stop Being Meta
Abraham Lincoln is purported to have said “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” I think this is generally a good philosophy to follow. But at some point you have to swing the axe and actually chop down the tree. If you have six hours to chop down the tree and you spend all six hours sharpening the axe, you’re obviously doing it wrong.
The problem we face with meta tasks is that we don’t really know how much time we have, and we have to allocate this unknown amount of time to axe sharpening (meta-work) and tree chopping (actual impact). At what point should we start chopping? I’m nervous that we may get so carried away with meta-work we forget to actually chop at some point.
Meta Trap #5: Sometimes, Well Executed Object-Level Action is What Best Grows the Movement
GiveWell is considered a meta-org, but they focus on direct research about which cause is best. Historically, they have not devoted many resources to outreach or marketing. Instead, they just focused on doing a very good job on their research and delivering high-quality recommendations. In turn, they attracted many donors, including a big foundation. As GiveWell says, “Much of our most valuable publicity and promotion has come from enthusiastic people who actively sought us out” and that they “have generally felt that improving the quality of [their] research, and [the] existing audience’s understanding of it, has been the most important factor in [their] growth”.
Perhaps counter-intuitively, doing really well on object-level stuff could also be one of the best things we can do to grow a quality movement. People aren’t attracted to marketing, they're attracted to people doing a good job. Marketing is only useful in so far as it draws attention to good work.
How Can We Defuse The Meta Trap?
To be clear, I don’t think the EA movement is in a meta trap yet. I think we’re doing good work and making a lot of progress on important, object-level issues. But I want to be careful. I think the meta trap is a real problem.
Here are two steps I think would work to defuse it:
1.) The more steps away from impact an EA plan is, the more additional scrutiny it should get. The idea of a meta-meta-org may sound unusual, but many EA plans are actually that. This doesn’t mean they’re wrongheaded -- I just think they warrant extra skepticism. Are we really getting extra impact from each step? Or are we just introducing a lot of risk by bringing in another chain that might collapse the whole thing?
2.) More EAs should have a substantial stake in object-level impact. Right now I’m aiming at donating 50% of my pool to the best meta-projects I know and spending the other 50% on direct impact through GiveWell’s top charities. I don’t know if 50% is the correct number, but I hope this will set an example of what I want the movement as a whole to do.
I think Jeff Kaufman at EA Global put it best:
So yes, we should do this, we should put substantial effort into growing the movement. But this isn't the only thing we should do. We can't have an entirely meta movement that goes grow, grow, grow, build growth capacity, bring in people to bring in people, bigger and bigger, and then shift focus? Turn your giant optimized-for-growth movement into an optimized-for-helping one? Not going to work.
We need to do things that help people alongside growing the movement, and personally I try to divide my efforts 50-50. As I argued above, for the doing-good-now portion I think global poverty is our best shot. This isn't settled—EA is all about being open to the best options for helping others, whatever those causes happen to be—but today I think the best you can do to help people now is donate to GiveWell's top charities.
I'm more positive to meta-level work and don't really recognize the risk that EA will fall into a meta-trap. That said, this is an interesting post on an important matter.
Generally speaking, I would guess that humans have a bias against meta-level work, and in favour of object-level work -- not least since the latter usually has more visible and easily understandable effects. They don't spend enough time on various indirect ways of improving their productivity, but prefer to just "start getting things done". It was not least due to this bias that it took such a long time before we saw any substantial progress, I'd say.
After the Scientific Revolution, we have seen very substantial progress. We've also seen an enormous explosion in the amount of "meta-work". Whereas people before mostly had object-level jobs (e.g. food production), nowadays most people work on meta-level work (e.g. improvement of food production, improvement of the cognitive skills of those working on food production, etc.).
Hence more people doing meta-work has historically been correlated with progress. Also, note that this development towards more meta-work has always been derided by populists who have claimed that meta-level work isn't "real work". To my mind, that is the real trap, which it's very important not to fall into.
I think there is a certain element of that kind of populism/bias to some of the criticisms of meta-level work. For instance, of course you have to stop working on movement growth at some point. That's not the question. The question is how to prioritize between movement growth and object-level work right now. And here I think that the haste consideration should be heavily weighted.
The most interesting point is the last one. Precisely because people are biased in favour of object-level work, object-level work can be very useful in recruiting. You can do an elevator pitch about earning to give in a way you can't about meta-level work whose rationale hinges on complex chains of reasoning. I think that the marginal value of this effect is rapidly diminishing, though - once you have a few examples of object-level work you can point to, additional examples won't help movement growth that much.
Also I'm not clear over how wide your definition of "meta-level work" is. Sometimes it seems like you're mostly talking about work on movement growth, but organizations like CFAR who don't work on movement growth are also counted as meta-organizations.
Finally, let me add that we should remember that it's not always clear how to distinguish between meta-level and object-level work.
I think the synthesis of your and Peter's points is that, whether for object-level or meta-work, how and how well we do something can be just as important as what we're trying to do in the first place. This distinction and consideration might be neglected by effective altruists when they're trying to figure out how to maximize impact.
Stefan (or Dr. Schubert? You have a Ph.D., and I don't know what's the most appropriate way to address you in this setting), I commented that I think the current definition effective altruism uses for meta-level work is not granular enough, and this likely causes significant problems in our decision-making. I broke this down into "cause prioritization" and "movement growth". However, you make me realize there need to be more distinctions than that. Here is my list for distinguishing meta-level work in effective altruism.
Cause Prioritization: GiveWell; Open Philanthropy Project; Global Priorities Project
Charity Prioritization: GiveWell; Animal Charity Evaluators; Future of Life Institute
Foundational Research: Foundational Research Institute; Future of Humanity Institute; Leverage Research; Center For Applied Rationality
Community Growth, Development, and Management: Center For Applied Rationality; Giving What We Can; Raising for Effective Giving; The Life You Can Save; Effective Altruism Outreach; Effective Altruism Ventures; Effective Altruism Action; 80,000 Hours
Fundraising and Advocacy: Charity Science; Raising for Effective Giving; The Life You Can Save; Giving What We Can; Effective Altruism Ventures; Centre for Effective Altruism
Original/Applied Research: Center For Applied Rationality; Charity Science; 80,000 Hours
Community Support: CFAR; 80k; CEA; .impact
If you look at all this diversity and how putting all this in one cause makes for strange bedfellows across organizations, I concur with Peter that meta-work, while important, isn't a cause in its own right, and it makes more sense to think of different meta-charities as just filling different roles within effective altruism. Some meta-charities are specific to an object-level cause or two, and some are not.
 Most meta-charities seem to occupy more than one of these categories.
 Prioritizing between charities within a single cause, rather than prioritizing between causes.
 Research not specific to a single cause, but aimed at improving decision-making, cause prioritization, and cause selection as processes themselves. "Foundational research" is considered research not otherwise completed by the scientific community, and is more or less constrained to effective altruism. It can be considered original research in the physical sciences, social sciences, or philosophy.
 While not its primary focus, through their workshops CFAR has built an alumni network numbering in the hundreds and constantly growing, the full potential for which both within and outside of effective altruism seems yet untapped. Thus, I consider this an important passive role CFAR plays.
 Not the same as "foundational research". "Original research" constitutes applying existing research methods from other science or scholarship to new topics, such as Charity Science doing fundraising experiments, or 80,000 Hours doing careers impact research, which apply methods from social science in new ways.
 Organizations which play a role in facilitating or improving the effectiveness of individual effective altruists or other organizations.
Evan (please say Stefan): nice work! I agree the "meta-level" category is too coarse-grained. At the same time, I also think there's a risk of becoming too fine-grained. You could have several different levels of granularity, but the uppermost one should be relatively coarse-grained I think (perhaps 5-7 categories, but I'm not sure of the exact number), since otherwise it gets too hard to remember. Could one turn your six categories into four -- movement growth (increased quantity of members), community support (improved productivity of individual members), prioritization, and research? They would in turn have different sub-categories. Not quite sure of this proposal though.
It'd be great if you wrote something more systematic on this. Do include a comprehensive discussion of the reasons for your categorization, though. Criteria could include:
1) The categories should be intuitively clear. You don't want a single category encompassing what is intuitively very different things.
2) Not too many categories at the most coarse-grained level.
3) Each category must denote something sufficiently important. If, e.g., few people are working on a cause, it's better to define it as a sub-category of another cause area.
Yeah, those four categories work better. I'll just do that in the future. I'll also cite your criteria. I have notes corresponding to a more systematic breakdown of this, basically trying to cover all of effective altruism. The other foci I include are causes, and are:
i) Global Poverty Reduction and Global Health: this includes charities recommended by GiveWell and The Life You Can Save.
ii) Animal Stewardship: this includes charities recommended by Animal Charity Evaluators, and additionally Direct Action Everywhere and Faunalytics (organizations which identify with effectiveness as a criterion for doing good, but don't explicitly identify with effective altruism as a movement).
iii) Globally Catastrophic and Existential Risk Reduction: includes MIRI, CSER, FHI, etc.
iv) Policy and Economics: would include the Open Philanthropy Project, your Evidence-Based Policy Project, the Center for Global Development (which has received multiple grants from Good Ventures), EA Policy Analytics, GCRI, GPP, and the Open Borders Action Group.
v) Political and Systemic Change: a category like advocacy, but it looks very different from what EA orgs do. Direct Action Everywhere and Open Borders Action Group are two examples, which raise awareness in very public ways, e.g., through protest action, more so than effective altruism organizations like Charity Science or Raising for Effective Giving, which raise funds and advocate only at semi-public events or on a smaller scale. These organizations are adjacent to effective altruism but not so core, as there has been dispute about whether the issues they're tackling and how they're tackling them constitute a tractable way of doing things. I may drop this category.
I'm thinking of keeping the division between "Foundational Research" and maybe "Scientific Research", though. While the category is new, improving scientific research is something Open Phil has a good chance of funding in the future, and New Harvest and other organizations doing work in the natural sciences may enter effective altruism more. I think that constitutes a category in itself. This laboratory work, whether industrial or academic, likely doesn't have much in common with, e.g., FHI, Leverage Research, and the Foundational Research Institute, which mostly do philosophy. Those three organizations seem to have much more in common with each other.
However, "scientific research" might be an object-level cause. It's the one cause where it's very difficult to tell how we ought to think of it. For example, is developing a vaccine for ebola an object-level project, or is it a meta-project, with distributing the vaccine being the only true objective of the program? I don't know.
This is a very good start, and I think you're capable of doing this well. I'm thinking, though, that policy-work can be one method of reducing global poverty, whereas giving to charity is another one. So perhaps you want two dimensions - one for methods, and one for objectives.
Or alternatively, you might want to distinguish problem-oriented organizations, which use any method to solve a given problem (say X-risk) from method-oriented organizations which perfect a given method (say influencing the government) which they apply to lots of areas.
You seem to have a very comprehensive grasp of the different EA organizations, which is obviously very useful when you do this categorization work.
I definitely think this is true of humans generally, but I don't think this is true of EAs at all. EAs I see are generally way more excited about meta work than GiveWell top charities.
That's an interesting argument from history that I hadn't considered. Thanks.
I agree that it's a risk of people generally, but I very much doubt it would happen in EA. Right now I think the balance is clearly toward there being too much meta work than too little. But I guess I could be wrong about that.
Right. But I don't think much of the haste consideration, for the reasons I mention in this post.
I would include CFAR in my list of meta-orgs and I do think CFAR risks falling into a meta trap. I've been critical of CFAR's argument for impact in the past.
What do you mean? I agree it is unclear in some respects. For example, as Eric Herboso pointed out, cause prioritization is meta-level work but should probably be treated as object-level and doesn't face as many meta traps. Likewise, AMF is technically a meta-org in so far as they partner with actual on-the-ground orgs, but they should be considered object-level for this analysis.
I can't think of any orgs people think of as object-level that are more accurately characterized as meta-level.
I think it's true of EAs as well, although to a somewhat smaller extent. But I think we should try to get more data on this issue in order to solve it (see my other comment ).
Good. These are the kind of things I mean. I just mean that we should keep in mind that the distinction isn't sharp.
Keep in mind that there are two senses of the word "meta" that I see used often; Peter is speaking specifically about "working not on [a] cause directly, but instead working on getting more people to work on that cause."
The other sense of "meta", where you're working not on the cause, but instead on figuring out the best interventions for that cause, is not what Peter is talking about here.
While the second sense of "meta" also might be a trap of sorts, since you could conceivably spend all your time/money doing meta-studies and never actually helping individuals, I don't think we're anywhere near that point for any EA causes. Even the most well-researched interventions deserve continued evaluation (such as evaluating the recent Cochrane review on deworming) and, in some cases, the research still requires a lot of work.
Reducing animal suffering is a prime example of this. Helping direct animal charities is important, but I believe it is far more important to continue working on research instead. Consider that in the field of animal welfare most intervention types have yet to be evaluated, and even the most highly regarded interventions come with caveats like “in the absence of strong reasons to believe the effects are negative, we expect the effects to be positive on balance” on corporate outreach, or the even more extreme “no difference found in the total change in consumption of animal products between the two groups”, after which the leafleting intervention is nevertheless recommended. (This is not to say that these interventions are poor; to the contrary, they're the best that have so far been found by ACE. I just think more money should go toward research to either find better interventions or better understand the current top interventions.)
So while Peter might be correct when it comes to meta-work in the sense of recruiting, I don't believe it would be correct in the sense of research.
I agree that research on effective ways to alleviate animal suffering is especially underfunded, and is plausibly a higher priority than direct work in this particular field, given the lack of knowledge about which types of direct work help.
I agree. There is a new initiative to fund and coordinate research on interventions for farm animals. We just posted details at: https://www.facebook.com/groups/EffectiveAnimalActivism/permalink/483367015167508/
Great stuff, Eric. Your input seems as valuable to consider as the OP itself is. I agree.
If you replace [animal] with [GCR reduction] or [x-risk reduction] in this sentence, I suspect the same is true for that cause, though I'm not personally enmeshed in the field enough to provide as good examples as you have for animal charity. This is why I currently favor increased research output from the Global Catastrophic Risks Institute, the Open Philanthropy Project, and/or the Future of Life Institute rather than, say, the Machine Intelligence Research Institute. I really need to look into this more, though, as you have for animal charities.
MIRI primarily does research too though. Do you mean you prefer to support cause prioritization research?
Sort of. I mean I support efforts to prioritize between catastrophic or existential risks, or broader searching, e.g., for alternative or multilateral approaches to A.I. risk, relative to just supporting MIRI's research agenda.
For the record, I do agree with this.
For the record, Peter, when you were in Vancouver and I told you I thought the best thing for effective altruism was to reduce uncertainty between or within causes, and you suggested I donate to Mercy For Animals to help them recoup the costs of and complete their research for intervention experiments, I agreed with you. I haven't donated, and likely won't donate in the near future, to this cause, though, because I am currently broke :(
I need to remedy this by applying for better jobs and spending less time on the EA Forum. This is so exciting it's hard for me to stop, though.
No worries. We were able to close our funding gap.
I thoroughly endorse this message, and say that as someone who does meta work full time (some of it as few steps from direct impact as possible, like directly fundraising for GiveWell charities in the here and now, but some of it several risky steps away, like the exact EA chapter push that Peter refers to). In particular, I endorse the suggestion of always keeping up some level of donations to the direct charities that you ultimately care about. (And, with characteristic nobility, I'm saying that against my own interest, as it decreases my chance of being able to hire people/eat/do other good stuff.)
You already know that I think this from my comments on your initial draft, but to repeat some of them here:
On our EA chapter push specifically, it's very reasonable to be nervous about some of the steps in the chain, and I am too.
I agree that "some meta-projects may have the highest impact in expectation", but think that applies to only a few carefully chosen ones. It's not a case of "the meta the better", and there's not unlimited room for worthwhile meta-projects to keep expanding.
I think it's still too strong to say that "to understand the meta-impact that meta-work has, it’s important to have opinions on which [object-level cause] is best" (my emphasis).
Anything which works towards building the EA movement or recruiting self-identified EAs who plan to do good for the rest of their lives (even if it's certain to succeed in these aims) inherits many failure points and speculative steps in its pathway to ultimate impact. I still think this work is sometimes worthwhile, but it's important to be conscious of this weakness.
I think you mean "the more meta the better".
It works as a pun in nonrhotic (e.g., some British) accents.
Can confirm. When I try it with a British accent, I notice my voice sounds fake and pretentious. When I just remove the "r" from the end of "better" while using my normal accent, I sound like any character from the Martin Scorsese film The Departed. This is fun. I might talk like this all the time now.
US east coast (Boston etc) is also traditionally nonrhotic.
Hence Evan's example of The Departed, which is my model of what Bostonians sound like. ;)
Awesome post. Wholeheartedly agree.
One thing worth pointing out is that doing non-meta work is actually quite hard. For instance, Harvard Effective Altruism has sometimes really struggled to find things to do other than organizing talks and outreach events. In a somewhat similar vein, it's a lot easier for most people to earn money to fund, e.g., global poverty causes than to work directly on global poverty causes in an effective way.
Which is, I think, why actually doing some cool stuff would be attractive to people / demonstrate that we aren't just a cultish group of people that tithe/give in a slightly different way from the several billion people doing it already.
I think Peter was primarily referring to launching and funding whole projects and/or organizations doing meta-work, not earning to give to an object-level cause as an individual.
You make a good point that going up another meta level adds another layer at which you can fail to have a positive impact. This is a really important problem with meta that I hadn't thought of.
I don't think we should be too concerned about the "Club Club" problem. Right now, meta-orgs like Charity Science are good donations targets precisely because they efficiently direct lots of donations to good object-level charities. If this stops being the case, Charity Science will no longer have the same appeal, and they can't make the same clear-cut arguments for their effectiveness.
That's true, but most Charity Science fundraising experiments are unusual in that their success or failure is relatively easy to determine (by design rather than by accident). We can generally look at direct, immediate, counterfactually-adjusted money moved and check them against the prespecified criteria for success or failure. That's (unavoidably) harder to do for many other meta activities.
So then my argument only applies to people who prefer to donate to meta-charities that have demonstrable evidence that they actually direct donations to effective object-level causes. This is what I do but I may have been overestimating how many people operate this way.
Sounds like you're mainly interested in projects only one meta-level above. But as the number of projects with 2+ meta-levels increases, this may get harder.
I agree with all this Peter.
One problem I think especially worth highlighting is not exactly the added "risk" of added meta-layers, but over-determination: i.e., with a lot of people, orgs, and even background news articles and culture floating around, all persuading people to get involved in EA, it's very hard to know how much any of them are contributing.
Another way of thinking about this is that in an overdetermined environment it seems like there would be a point at which the impact of EA movement building will be "causing a person to join EA sooner" instead of "adding another person to EA" (which is the current basis for evaluating EA movement building impact), which would be much less valuable.
From my point of view, I can't tell that EA won't be a distraction for people who are already altruistic and effective, especially now that there are more people than direct-enough projects.
One big thing that's missing from this post is: where does the balance lie today? Perhaps when talking to EAs it seems like there's more excitement about meta stuff (edit: and you'd expect a strong selection effect here -- the people who run movement-building activities will be the most keen on it), but if you look at the balance of where the money is going, it's heavily weighted towards object-level activities.
In 2014, about $40m of funding was allocated on the basis of GiveWell's recommendations, whereas only a couple of million dollars was spent on meta activities. http://www.givewell.org/about/impact
These worries are real worries to look out for, but when less than 5% of resources are going into meta, it's difficult to think they're an issue today.
You approvingly cite Jeff's 50:50 recommendation, which would actually involve a huge ramping up of meta activities from today.
This is a fair criticism and something I did think a good deal about. However, my point was not to advise the world as a whole, but to advise the readers of the EA forum. My understanding is that the typical reader of the EA forum is a lot more predisposed to meta-orgs.
Remember that I have to say something that is true, insightful, and actionable. Even if it is true that meta-orgs are underfunded in the global community, there's little insight or actionable advice there. Getting people to see potential pitfalls of meta-orgs that I see very little discussion about is more insightful and actionable for the readers here.
Ok, we get into a similar problem with earning to give. For new people, I want to encourage them to etg, but for people already heavily involved in the community, I want them to probably do it less.
If this is how you feel though, I think you could have been a lot clearer.
e.g. "I'm not sure if we need more or less meta, but here's some arguments against meta that I don't see raised often enough"
or "The EA movement as a whole probably needs more meta, but it seems like too many of the most involved EAs are focused on it"
What sort of feedback signals would we get if EA was currently falling into a meta-trap? What is the current state of those signals?
Two ideas might be:
1.) Amount of money going to meta-stuff vs. amount of money going to object-level stuff. What is the total budget of CEA + GWWC + 80K + Charity Science + TLYCS + ACE + ...? What is the total amount of money being moved to GiveWell top charities + MIRI + ACE top charities + ...?
2.) Average meta-level of meta-stuff. Is a typical project one level above (e.g., Charity Science fundraising for GiveWell top charities) or 2+ levels above (e.g., CEA funding a team to assess the EA movement for gaps in meta-orgs)? What is the level weighted by total funding per project?
We can measure this. Measuring (1) seems like it would be easier than measuring (2). For (1), we just need to check the funds meta-charities have received vs. the proportion of the funds direct work receives that comes from effective altruism. Measuring (2) seems like it would require contacting and conversing with the executives of meta-charities. That's easy enough on its own, except they're probably busy, and it would take them time to figure out exactly how meta each of their projects is, time they might rather spend running their organization. It'd be easier to get them to respond if we informed meta-charities that, e.g., providing this information would be incorporated into a global transparency report on meta-charity on the EA Forum, that we expect being included in the report would boost the meta-charity's prospects, and that their conspicuous absence or lack of information provided might just raise more questions or skepticism of them in the future.
Anyway, I'd be willing to do all this to gauge if effective altruism is indeed falling into a "meta trap". Let me know what you think.
I don't understand what this means. Can you put it another way? Feel free to use real or hypothetical examples.
For example, imagine there are only two projects in the entire EA movement. Christie's Chapters is an EA group that supports volunteers in creating local chapters, which is three levels removed from direct impact. Boris's Bednets is a fundraising org that raises money for AMF, which is one level removed from impact. The average meta level is thus (3 + 1)/2 = 2.
Another way to look at it is to weigh by funding. If Christie's Chapters has $1M but Boris's Bednets has $10M, we should weigh Boris's Bednets much more highly since more resources are going to it. We thus apply a weighted average: (3 × $1M + 1 × $10M) / $11M ≈ 1.18.
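The two averages in this hypothetical can be checked with a quick sketch (the org names and figures are the made-up ones from the example above):

```python
# Hypothetical example orgs from the comment: (meta-level, funding in dollars)
projects = {
    "Christie's Chapters": (3, 1_000_000),
    "Boris's Bednets": (1, 10_000_000),
}

# Simple (unweighted) average meta-level: (3 + 1) / 2
simple_avg = sum(level for level, _ in projects.values()) / len(projects)

# Funding-weighted average meta-level: more money at a level pulls the average toward it
total_funding = sum(funding for _, funding in projects.values())
weighted_avg = sum(level * funding for level, funding in projects.values()) / total_funding

print(simple_avg)               # 2.0
print(round(weighted_avg, 2))   # 1.18
```

Weighting by funding pulls the movement-wide meta-level down here because the bulk of the money sits only one level above direct impact.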
This makes lots of sense, thanks. I can now carry on with the project of checking all this money moved, including with weighted averages.
Was this selection inspired by Boris Yakubchik?
lots of excitement, little in the way of new or surprising successes.
What data do we have on this question? It's hard to assess Peter's hypothesis that EA risks falling into a meta-trap without knowing, e.g. roughly how many EAs work on meta- and object causes, to what extent more work on movement growth has actually led to more movement growth, etc. Is such data being collected?
We could calculate these numbers and look to the EA survey for other things like career choice and cause choice.
Funny sidenote: my friend Max from high school started a real club, approved by the high school by way of getting enough members at the beginning of the year and whatnot, that was functionally the same as Club Club. It was called the "Sandwich Club", and it was a club for the appreciation of sandwiches, and its goal was to attract as many members as possible such that their membership implied they were great fans of sandwiches. This was the only purpose of the club, and it never did anything, and it never had any meetings, and never collected fees from its members for any events. If I recall correctly, it ran for two years in a row. That's stupendous, considering the whole thing was a joke, and my friend Max saw fit to pull the same shenanigan twice.
What we need is a balance between building the movement and actually working on causes. I don’t have an answer for this split, but I have been concerned lately that the EA movement currently seems to be focussing mostly on movement building and cause prioritisation, rather than actual cause work. I like to think about ‘quick wins’ that the EA movement can achieve, and tackle these in the short term. Achieving quick wins may also attract more people to the movement as they see the immediate effectiveness of what we do. In addition to movement building and earning to give I am involved in researching and writing an article calling for a government funded public health campaign to advertise against meat consumption.
The rationale is that this would improve the health of individuals, reduce the public burden on health care freeing up government dollars to do something else (hopefully a good something) and to reduce animal suffering. It’s a slightly risky prospect as it may not result in action, but I’m confident that it will educate people about the health side of meat consumption in the worst case scenario. I think that as a movement we need to team up to focus on these quick wins a little more.
I don't think that's true. Of the most active EAs, you'll find they're enthused about movement building, but there's a strong selection effect there (the people who think movement building is good are the people you find talking about it!). The large majority of resources, however, are being donated to GiveWell recommended charities, or contained within Open Phil.
Edit: this essay is great and I'm excited and I wanted to build on what Peter wrote so much I didn't even finish reading the whole thing before I've started in great volume commenting on each individual point. I believe I've gone over something Peter already covered in the OP, before I realized it. I'll edit that out for brevity, but forgive me if I miss something and I'm just needlessly repeating Peter.
[Epistemic Status: heady, giddy and rapid hypothesis generation]
I perceive two pitfalls here. First, logistics and administration may become more difficult as the meta-project grows. For however many levels n an organization goes meta (where n is the number of steps removed the management of the project is from the object-level goals of effective altruism), there will be more people, information, and organizations to keep up with. As a project gets more meta, it will become more difficult to convince effective altruists in general to increase its funding so it can scale, or become even more meta. So, a project that from the start seeks to go more meta in an unrestrained way will also face constant talent and financial constraints. As it goes more meta, it will be difficult to find fitting employees, and because the project will have grown so abstract, it will become difficult to explain how it works in the first place, let alone to overcome funders' skepticism. If a project receives all its funding from one major donor, or a single consortium of donors, the project managers risk losing the independence to run their project as they see fit, without being held back by constant questions or donors/directors steering the project in a new direction. This is why, e.g., GiveWell doesn't want to receive all its funding from Good Ventures.
Running a meta-project in a nimble way that reacts quickly to changing circumstances and steep learning curves seems necessary, as meta-projects are almost always breaking new ground in the non-profit sector and/or effective altruism when they're founded. So, they can't risk losing their independence by courting only one donor who may go on seeking to steer the ship themselves. If the meta-project in question were constrained to effective altruism, and otherwise facing financial constraints, I'd be skeptical of any claim that it could find sufficient funding by going outside the effective altruism community.
Second, there is a temptation for meta-projects to go to the nth meta-level indefinitely. If they do so, I figure they'd eventually reach the point where the network they've built for expanding effective altruism becomes unmanageable, and the members of that network won't coordinate or gain the self-awareness to know what to do with themselves. So, the whole thing would unwind. While not all the value of the meta-project would be undone in such a case, I think there would be sufficient collapse or loss that the initial costs to start and ongoing costs to maintain the project would be unjustified, and the resources would counterfactually have done more good at some lower level of organization. Whether that level is the object level (e.g., just donating to AMF) or only one meta-level up (merely fundraising for AMF), there would have been a point at which the managers should have known to stop the constant abstraction of the project.
I think the solution to both these problems is, and will need to be, greater accountability and oversight. Major donors to EA meta-projects might want to see a laid-out plan of operational goals for the next year, a budgetary breakdown of the funding anticipated to be necessary for those goals, and a detailed account of how the project has performed in the past, to demonstrate a track record of reliability. Charity Science does all that, right? This is a cross between the proposal to fix science by registering the hypothesis before the study is conducted, and a company transparently providing information to assure investors that its executives are making the best choices they can. If so, I think every other EA meta-project or meta-charity should be expected to do the same. I'd be happy to help normalize this trend. The best way to do that would be to explain how I donated to, e.g., Charity Science instead of CEA or GiveWell based on Charity Science making abundantly clear their operational goals and what they expected to achieve relative to other organizations. I don't have enough money to donate right now to justify that, and likely won't in the next year, so I can't do that. I encourage others, such as yourself, Peter, to do it more often. I'd lend my moral, vocal, or other support in the present, though.
Also, I figure meta-projects or meta-charities should be incentivized, in addition to the above, to preregister their low-ball, average, and stretch goals for the year, with as quantified a level of confidence as they can muster. Incentivizing this could be facilitated by an EA prediction market or other mechanisms of moral economics that have recently been discussed on this forum. A prediction market could help calibrate a project's expectations: the best forecasters in the market would make their own predictions of the meta-project's projected success. If all the best forecasters, with their proven track records, independently converged on the conclusion that the scope of the project was overconfident, the project managers would be induced to temper their overconfidence and, e.g., ask for less funding than they claim they can optimally use. To get an organization to change its behavior in the face of such a prediction-market scenario, I figure it might need to be incentivized with rewards for updating in the right direction. I can't think of any right now besides assurance that it would receive the appropriate level of funding (corresponding to its most realistic goals).
Finally, it seems to me that how internally well-connected the existing effective altruism network is matters just as much for facilitating valuable object-level work as growing the movement does. I call the former, increasing the value of internal networking, growing stronger, and the latter, growing the network as a whole, growing bigger. This distinguishes the different ways effective altruists use the phrase "movement growth". The distinction was first made clear to me at the 2013 EA Summit: "growing stronger" seemed to be the approach to movement development favored by Anna Salamon and CFAR, "growing bigger" the approach favored by the CEA, and a combination of both the strategy seemingly favored by Geoff Anders and Leverage Research. I think managing and improving the internal strength of the community as it is, and how we connect and collaborate, is just as or more important than increasing the absolute size of effective altruism. Another way of thinking of this is: increasing absolute impact vs. increasing impact per unit of effort expended. My recent spate of proposals to and engagement with .impact has been motivated by facilitating movement development via increasing the utility of the current network.
I agree. Meta-projects are inherently difficult to evaluate, but I do think we're not spending nearly as much time or money on such things as we could.
Also, thanks for all your feedback, Evan. I'm glad you liked it.
Great post, Peter.
You helped change my perspective from my post yesterday.
I hadn't considered your point:
It makes sense. For example, if you set an example for others as someone who thoughtfully and altruistically donates a significant portion of their income to charity, many others may follow.
This is a very important point. While I think it's quite possible to identify effective meta-level work that avoids your Meta Traps #1-#4, I think it's probably harder than most people (including myself) would initially think, due to many initial ideas falling into one or more of the meta traps.
(Typo: "People aren’t attracted to marketing, their attracted to people doing a good job." -- should be "they're")
The example you use in point 5 could be made stronger. How about the creation of AMF? That arguably did a huge amount to boost the EA movement by giving us a clear success story to rally around.
Using the example of GiveWell not focusing on marketing involves an odd definition of meta. Meta normally includes both cause prioritization research and the promotion of that research so people act on it, but you suddenly retreat to defining meta as only marketing. GiveWell is a meta org if anyone is.
That's a much better example. Thanks!
That's a fair criticism. I definitely agree that cause prioritization is meta- and that GiveWell is a meta-org.
On the other hand, I think cause prioritization is exempt from many of the meta-traps that affect other meta orgs: the case for impact is quite clear (re: 1) assuming people will use the research, which is usually easily established; it's usually only one level above the object-level stuff (re: 2); it directly addresses 3 by providing decision-relevant research; and good research attracts respect (re: 5). The biggest problem I see for cause prioritization is 4 (that, eventually, you have to actually act on the research).
Marketing of good object-level causes (e.g., what GWWC does) is also only one level above object-level stuff, and has quite clear impact (you told people about something good and they did it).
Though I agree that persuading people of effective altruism, with the hope they later do good object-level stuff, is another level removed. Is that the main form of meta you're concerned by? Until recently, there have been very few resources directly invested in that kind of activity, so it seems like we have a way to go before being in a meta trap. The key is to keep watching the metrics.
Yep; I agree.
You mention that far meta concerns with high expected value deserve lots of scrutiny, and this seems correct. I guess that you could use a multi-level model to penalize the most meta of concerns, and calculate new expected values for different things that you might fund, but maybe even that wouldn't be sufficient.
It seems like funding a given meta activity on the margin should be given less consideration (i.e., your calculated expected value for funding that thing should be revised further downwards) if x% of charitable funds being spent by EAs are already going to meta causes, and more consideration if only, e.g., 0.5x% of charitable funds being spent by EAs are already going to meta causes. This makes sense because of reputational effects: it looks weird to new EAs if too much is being spent on meta projects.
The meta-charity I currently favor is cause prioritization, but I don't even know how to best achieve that. I'm prone to the same mistake in supporting meta-charity as Peter is. Thus, what I actually favor over giving to a meta-charity in the face of object-level indecision is putting the money in a donor-advised fund, or investing it through the Giving What We Can Trust. I don't know what object-level cause I support. However, in supporting a meta-charity, I'd also neglect or procrastinate to figure out what object-level cause I prioritize. Thus, I precommit to:
a) either donating 50% of my donations to an object-level cause, or
b) in the face of ongoing object-level indecision, put 50% of my donations into a donor-advised fund or the GWWC Trust earmarked specifically and only for future donation to an object-level effective charity, to be fulfilled when I select a cause.
I follow in the footsteps of Peter and Jeff, and I encourage you to do the same. Also, Peter, thanks for this! This development makes me much less nervous and stressed about my role as a donor, perhaps the biggest struggle I've faced with effective altruism.
For me, I think the mistake is putting "movement growth" and "cause prioritization" under the same conceptual cause as "meta". Sure, they're both meta, but in practice carrying them out is very different. It's as difficult to compare movement growth and cause prioritization as it is to directly compare any two object-level causes of effective altruism. I don't know how much this has confused effective altruists when thinking about supporting meta-projects, but I wouldn't be surprised if it's a lot. Defusing that confusion could undo much suboptimal thinking. For example, a funder of the Centre for Effective Altruism might realize, when making their expected value calculations explicit, that they expect the Global Priorities Project to have a much greater impact than Effective Altruism Outreach, or vice versa. Or, maybe they'd end up determining 80,000 Hours would do better than either. In that case, it would make sense to earmark a donation to the CEA as limited to one project rather than usable across all of them. Regardless of what cause or charity an effective altruist selects for donation, I want them to make fewer mistakes in how they think about it. This is probably a mistake I and others have already made. To the end of fixing that, I'd like us to unpack "meta" as a cause, and discriminate between its facets more.
Do we have a rigorous estimate for the cost effectiveness of any intervention aimed at encouraging people to join the EA movement? This could be very useful information when deciding how to allocate money between movement building organizations and direct work organizations. This is obviously only one consideration, but an RCT that gives us a cost effectiveness number would be quite valuable.
I find that all the EA organizations (GiveWell, GWWC, The Life You Can Save) essentially point to the same 4-5 organizations: AMF, SCI, Evidence Action/DtW, etc. Do we really need so many organizations? If they are reaching out to different audiences (GiveWell = finance industry; GWWC = college students), then there's a reason for all to exist, as they broaden the donor base for AMF, for example. But if they are reaching out to the same audience (the basic EA giver) with slightly different perspectives, then it's not as useful, since in the end they are persuading the same donor, i.e., taking share from each other.
A couple of suggestions for EAs to track effectiveness and prevent the zero-sum game (apologies if this is already in place)
In addition to looking at money moved, etc., consider who is moving the money -- students? hedge fund managers? That will help show whether they are reaching their target audience. GWWC would have a high base of students who are not donating a lot now but plan to donate a lot in the future. GiveWell would have a high proportion of people earning $200k+ (they exist!) who would donate at least $20-50k a year. The more a particular person donates, the more information they may need (and this is actual money, not a pledge). I know that when I started donating more than the token amount I was spending on a drink in a restaurant, I suddenly became much more aware of whether I was donating to the right cause.
Depending on the audience, the "marketing" would be different. An investor looking to shift millions would need the kind of lengthy research that GiveWell specializes in (and GiveWell should be tracking that; the average size of money moved through them should be substantially higher than GWWC's, for instance), while GWWC may need higher-penetration/lower-touch ways of getting in touch. Maybe not the focus on pledges, but follow-ups with students 5 years down the line when they have started earning a lot, to ensure they are actually following through.
I'm not sure your #1 is really an instance of the conjunction fallacy (which is having a higher credence in a conjunction---BANK TELLER & FEMINIST---than in a single conjunct---BANK TELLER...). I might call it the "Outsourcing Fallacy": the belief that it's always better to go meta and get someone else to do first-order work (here, donation). Obviously that's not true, though: if it costs me $5 to get each of two people to donate $1, I should have just avoided the exercise and gone first-order.
There are well-understood explanations for when and why people fall victim to the conjunction fallacy. Why do people engage in the outsourcing fallacy? A simple answer: doing so gives me evidence that my influence over others is greater than it is, which is good for my ego?
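The break-even check implied by the $5/$1 example can be written out (a toy sketch; the function name and numbers are hypothetical):

```python
def going_meta_is_worthwhile(cost_per_person, donation_per_person, n_people):
    """Going meta only beats first-order donation if the money you
    counterfactually move exceeds what you spent moving it."""
    money_moved = donation_per_person * n_people
    cost = cost_per_person * n_people
    return money_moved > cost

# The comment's example: $5 spent per person to move $1 each from two people.
# Better to have donated the $10 directly.
print(going_meta_is_worthwhile(5, 1, 2))  # False
```

The leverage ratio (money moved per dollar spent) has to exceed 1 before the outsourcing step adds any value at all, before even accounting for overhead or overlap with other fundraisers.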
We as effective altruists either considering launching or supporting meta-projects need to figure out:
How to make calculations of probability chains we actually feel we can rely on. How would we figure this out? I'd guess you'd take lessons from How To Measure Anything, and then get good at Bayesian thinking? I don't know, though I figure this is something we could seek help from CFAR and/or the rationalist community in figuring out how to do.
Actually doing them and publishing them for feedback before any of us launch the project. Where there is a bottleneck, or feedback from the community otherwise convinces founders there is a weak link in the probability chain, they can refine or change the plan to improve the expected odds of success.
As funders or supporters of such a meta-charity or meta-project, we'd do best to demand that a final and meticulous draft, in the form of a report or something, building on the calculation in step (2), be published for scrutiny before going forward with funding.