The goal of this short-form post: to outline what I see as the key common ground in the “big tent” versus “small and weird” discussions that have been happening recently, and to outline one candidate point of disagreement.
Tl;dr:
* Common ground:
* Everyone really values good thinking processes/epistemics/reasoning transparency and wants to make sure we maintain that aspect of the existing effective altruism community
* Impact is fat-tailed
* We might be getting a lot more attention soon because of our increased spending and because of the August release of "What We Owe the Future" (and the marketing push that is likely to accompany its release)[1]
* A key point of disagreement: Does focusing on finding the people who produce the “tail” impact actually result in more impact?
* One reason this wouldn’t be the case: “median” community building efforts and “tail” community building efforts are complements not substitutes. They are multipliers[2] of each other, rather than being additive and independent.
* The additive hypothesis is simpler so I felt the multiplicative hypothesis needed some outlined mechanisms. Possible mechanisms:
* Mechanism 1: the sorts of community building efforts that are more “median” friendly actually help the people who eventually create the “tail” impact become more interested in these ideas and more interested in taking bigger action with time
* Mechanism 2: our biggest lever for impact in the future will not be the highly dedicated individuals but our influence on people on the periphery of the effective altruism community (what I call “campground” effects)
Preamble (read: pre-ramble)
This is my summary of my vibe/impressions of the parts of the recent discussion that have stood out to me as particularly important. I intend to finish my half-dozen drafts of a top-level post (with much more explanation of my random jargon, which isn’t always common even in effective altruism circles) at some point, but I thought I’d start by sharing these rough thoughts to help get me over the “sharing things on the EA forum is scary” hump.
I might end up sharing this post as a top-level post later, once I’ve translated my random jargon a bit more and thought a bit more about the claims here I’m least sure of (possibly with a clearer outline of which cruxes make the “multiplicative effects” mechanisms more or less compelling).
Some common ground
These are my impressions of some claims that seem to be pretty common across the board (but that people sometimes talk about as though they suspect the person they’re talking to might not agree, so I think it’s worth making them explicit somewhere).
1. The biggest one seems to be: We like the fact that effective altruism has good thinking processes/epistemics a lot! We don’t want to jeopardize our reasoning transparency and scout mindsets for the sake of going viral.
2. Impact is fat-tailed and this makes community-building challenging: there are a lot of uncomfortable trade-offs that might need to be made if we want to build the effective altruism community into a community that will be able to do as much good as possible.
3. We might be getting a lot more attention very soon whether we want to or not because we're spending more (and spending in places that get a lot of media attention like political races) and because there will be a big marketing push for "What We Owe the Future" to, potentially, a very big audience. [3]
A point of disagreement
It seems like there are a few points of disagreement that I intended to go into, but this one got pretty long so I’ll just leave this as one point:
Does focusing on “tail” people actually result in more impact?
Are “tail” work and “median” work complements or substitutes? Are they additive (so specializing in the bit with all the impact makes sense) or multiplicative (so doing both well is a necessary condition for getting “tails”)?
I feel like the “additive/substitutes” hypothesis is more intuitive/a simpler assumption so I’ve outlined some explicit mechanisms for the “multiplicative/complements” hypothesis.
Mechanisms for the “multiplicative/complements” hypothesis
Mechanism 1
“Tail” people often require “soft” entry points similar to those of “non-tail” people, so focusing on the “median” people along some dimensions is actually better at reaching the “tail” people, because we just model “tail” people wrong (e.g. it can look like some people were always going to be tails, but when we deep-dive into individual “tail” stories, there were often accidental “soft” entry points).
The dimensions where people advocate for lowering the bar are not epistemics/thinking processes, but things like:
1. language barriers (e.g. reducing jargon, finding a plain English way to say something or doing your best to define the jargon when you use it if you think it’s so useful that it’s worth a definition),
2. making it easier for people to transition at their own pace from wherever they are to “extreme dedication” (and being very okay with some people stopping completely well before that), and
3. reducing the social pressure to agree with the current set of conclusions by putting a lot more emphasis on a broader spectrum of plausible candidates we might focus on if we’re trying to help others as much as possible (where “plausible candidates” are answers to the question of how we can help others the most with impartiality, whether that means considering all people alive today[4] or even larger moral circles/circles of compassion, such as all present and future sentient beings)
Mechanism 2
As we get more exposure, our biggest lever for impact might not be the people who get really enthusiastic about effective altruism and go all the way to the last stop of the crazy train (what I might call our current tent), but the cultural memes we’re spreading to friends-of-friends-of-friends of people who have interacted with the effective altruism community or its ideas and have strong views about them (positive or negative), which I have been calling “our campground” in all my essays to myself on this topic 🤣.
E.g. suppose the only thing that matters for humanity’s survival is who ends up in a very small number of very pivotal rooms.[5] It might be much easier to influence, a little bit, a lot of the people who are likely to be in those rooms, so that they’re thinking about some of the key considerations we hope they’d be considering (it’d be nice if we made it more likely that a lot of people have the thought “a lot might be at stake here, let’s take a breather before we do X”), than to get into those rooms people who have dedicated their lives to reducing x-risk because effective altruism-style thinking and caring is a core part of who they are.
As we get more exposure, it definitely seems true that “campground” effects are going to get bigger whether we like it or not.[6]
It is an open question (in my mind at least) whether we can leverage this to have a lot more impact or whether the best we can do is sit tight and try and keep the small core community on point.
1. ^
As a little aside, I am so excited to get my hands on a copy (suddenly August doesn't seem so soon)!
2. ^
Additive and multiplicative models aren't the only two plausible "approximations" of what might be going on, but they are a nice starting point. It doesn't seem outside the range of possibility that there are big positive feedback loops between "core camp efforts" and "campground efforts" (and all the efforts in between). If this is plausibly true, then the "tails" for the impact of the effective altruism community as a whole could be here.
3. ^
This point of common ground was edited in after this comment was first posted.
4. ^
This is a pretty arbitrary cutoff for what counts as a large enough moral circle to fall under the broader idea behind effective altruism and trying to do the most good, but I like being explicit about what we might mean, because otherwise people get confused/it’s harder to identify what is a disagreement about the facts and what is just a lack of clarity in the questions we’re trying to ask.
I like this arbitrary cutoff a lot because
1) anyone who cares about every single person alive today already has a ginormous moral circle, and I think that’s incredible: this seems very much wide enough to get at the vibe of widely caring about others, and
2) the crazy train goes pretty far, and it is not at all obvious to me where the “right” stopping point is; I’ve gotten off a few stops along the way (my shortcut to avoid dealing with some of the crazier questions further down the line, like infinite ethics, is to “just” consider those in my light cone, whether simulated or not 😅). This isn’t because I actually think that’s all that reasonable, but because more thought on what “the answer” is seems to get in the way of me thinking hard about doing the things I think are pretty good, which, in expectation, I think actually does more for what I’d guess I’d care about if I had all of time to think about it.
5. ^
This example is total plagiarism; see: https://80000hours.org/podcast/episodes/sam-bankman-fried-high-risk-approach-to-crypto-and-doing-good/ (the episode also has a great discussion of multiplicative-type effects sometimes being a big deal, which I feel people in the effective altruism community think about less than we should: more specialization and more narrowing of focus isn’t always the best strategy on the margin for maximizing how good things are and will be in expectation, especially as we grow and have more variation in people’s comparative advantages within our community, and more specifically within our set of community builders).
6. ^
If our brand/reputation has lock-in for a really long time, this could plausibly be a hinge-of-history moment for the effective altruism community. If there are ways of making our branding/reputation as high fidelity as possible within the low-fidelity channels that messages travel through virally, this could be a huge deal (ideally, once we have some goodwill from the broader "campground", we will have a bit of a long reflection to work out what we want our tent to look like 🤣😝).