I think it's valuable to have a low-barrier way for people to engage with EA without having years of experience and spending hours on writing a high-quality post. Do you have any ideas for how to avoid the trade-off between quality and accessibility?
Personally, I find the 'frontpage' and 'curated' filters to work pretty well. Regardless of the average quality of posts, as long as the absolute number of high-quality posts doesn't decrease (and I see no reason why that would happen), filters like these should be able to keep a more curated experience intact, no?
There's been a lot of community building happening in the Netherlands lately, very excited for this!
Besides Will himself, congrats to the people who coordinated the media campaign around this book! Beyond the many articles, such as the ones in Time, the New Yorker, and the New York Times, a ridiculous number of YouTube channels I follow have uploaded WWOTF-related videos recently.
The bottleneck for longtermism becoming mainstream seems to be conveying these inherently unintuitive ideas in an intuitive and high-fidelity way. From the first half I've read so far, I think this book can help a lot in alleviating that bottleneck. Excited for more people to become familiar with these ideas and get in touch with EA! I think we community builders are going to be busy for a while.
I think it gave some valuable commentary on the tensions between how people inside and outside the movement view EA. How we deal with these stark contrasts will only become more relevant if EA becomes more mainstream, and the What We Owe the Future release, with its ambitious media campaign, appears to be gearing up to make a serious push in this direction! Some excerpts related to these tensions:
Money, which no longer seemed an object, was increasingly being reinvested in the community itself. The math could work out: it was a canny investment to spend thousands of dollars to recruit the next Sam Bankman-Fried. But the logic of the exponential downstream had some kinship with a multilevel-marketing ploy. Similarly, if you assigned an arbitrarily high value to an E.A.’s hourly output, it was easy to justify luxuries such as laundry services for undergraduate groups, or, as one person put it to me, wincing, “retreats to teach people how to run retreats.” Josh Morrison, a kidney donor and the founder of a pandemic-response organization, commented on the forum, “The Ponzi-ishness of the whole thing doesn’t quite sit well.”
The community’s priorities were prone to capture by its funders. Cremer said, of Bankman-Fried, “Now everyone is in the Bahamas, and now all of a sudden we have to listen to three-hour podcasts with him, because he’s the one with all the money. He’s good at crypto so he must be good at public policy . . . what?!”
It does, in any case, seem convenient that a group of moral philosophers and computer scientists happened to conclude that the people most likely to safeguard humanity’s future are moral philosophers and computer scientists.
Members of the mutinous cohort told me that the movement’s leaders were not to be taken at their word—that they would say anything in public to maximize impact. Some of the paranoia—rumor-mill references to secret Google docs and ruthless clandestine councils—seemed overstated, but there was a core cadre that exercised control over public messaging; its members debated, for example, how to formulate their position that climate change was probably not as important as runaway A.I. without sounding like denialists or jerks. ... Was MacAskill’s gambit with me—the wild swimming in the frigid lake—merely a calculation that it was best to start things off with a showy abdication of the calculus?
And most eloquently:
From the outside, E.A. could look like a chipper doomsday cult intent on imposing its narrow vision on the world. From the inside, its adherents feel as though they are just trying to figure out how to allocate limited resources—a task that most charities and governments undertake with perhaps one thought too few.
Even if there are good arguments for why EA is doing what it is and heading in the direction that it is, will those arguments be communicated with enough fidelity to make the process of EA becoming more mainstream go well? Although the author of this article is looking in from the outside, they did a lot of research and made a very conscious effort to produce a proper analysis, which probably makes it closer to a best-case rather than worst-case scenario as far as impressions of EA go. Still very excited to see the reactions of more people becoming familiar with EA, just also a bit anxious.
Self-criticism may be a necessary component of achieving the real goal of improving EA for the better. Yet it's not sufficient. The extent of self-criticism is now excessive.
I completely agree with the first sentence, but am not sure about the second. If more were to be done to implement changes, would you still call the current level of self-criticism excessive? Generating a lot of ideas about how we could better steer the ship before selecting the best ones and actually pulling on the ship's wheel would be crucial, no?
I think this post touches on some really important topics, so thanks a lot for writing it! To push back on some things:
Would you say the same applies to newer university groups? It seems likely to me that following through on the advice of this post would limit the amount of people that hear about EA at your university. If you don't already have a mature university group, a lack of growth means it may never become one, which would be a very steep opportunity cost.
Phrased differently, this post appears to come from the perspective of a mature group, where the impact bottleneck is the organizers and active members setting themselves up for impactful work and projects, as opposed to a less mature group, where the impact bottleneck is finding and reaching out to people who would be interested in EA. If you limit intro talks, fellowships, and 1:1s, how much value can you really provide for people who aren't already EAs?
Although the advice in this post could be very beneficial for groups in the second stage, it could possibly be harmful for the multitude of groups in the first stage.
Claim 1: Having the most promising people market EA is inefficient.
For newer groups, if the most engaged and promising people don't market EA, it is likely no one will. To change that, you need to build up a group and find various organizers, which requires growth, which can be hard to achieve without marketing.
Claim 2: Too much marketing causes bad epistemics in the group.
Can't say I've noticed this much, personally. From speaker events to fellowships and book clubs, marketing will generally point to an event or program where exploring ideas and skilling up are central (though admittedly not necessarily for the organizers).
Especially if the organizers doing the outreach don’t have a good understanding of problems themselves, they might be perceived as unconvincing by the epistemically-rigorous people they want to attract.
Doesn't this also function as an argument for why the marketing should be done by the most engaged and promising people?
Claim 4: Leaders marketing EA too much causes bad perceptions of EA around campus.
I agree that the difficulty of accurately conveying what EA is and does through the brief moments of first impressions definitely poses a risk for reputation around campus. However, I feel this could also be used as an argument for spending more rather than less time thinking through how you signal and market EA.
Perhaps a good takeaway from all this is that marketing for university groups should ultimately be self-defeating rather than self-reinforcing: use marketing as a means of creating a solid core group of people interested in EA, after which it can take a backseat on the list of priorities and skilling up this core group becomes the focus.
Finally, I feel like a lot of this could be avoided by creating standardized pipelines for marketing EA and setting up the digital infrastructure (think of a website, LinkedIn, Instagram, Facebook, Slack, Discord, Circle, a mailing list, an announcement chat, a calendar for events, Calendly for 1:1s, a sharable QR code to a Linktree, and perhaps most importantly, which of these you even need in the first place). This would both free up a lot of time for organizers and allow for fine-tuning the messaging to the extent that this is possible. Luckily, it appears this is being worked on.
To add to this, I would like to emphasize the lack of reasoning transparency in the current estimates as one of our main concerns - not just the estimates of the value of additional high-impact EAs, but especially those of the value of community building roles at (top) universities and of the degree to which university groups 'create' these high-impact EAs (which becomes potentially even more dubious when HEAs are used as a proxy metric for impact, for reasons similar to your first bullet point).
We originally had these estimates in mind as the main red teaming topic, but we soon figured out there wasn't enough substance to turn it into a red team by itself, as the estimates mainly seemed to stem from guesstimates.
Very excited to see how this major nationwide community building push will turn out! As someone who has gotten involved with setting up several of these new local groups, it has been really motivating and encouraging to have this support network of other organizers starting groups around the same time. If you run into a problem, there's a good chance others are struggling with the same issues and you can collaborate on solutions! In my experience this lowers the barrier for getting involved as an organizer quite considerably, and I predict it will make it easier to find future organizers as well.
Collaborations like these have so much potential! Because of this video alone, millions will learn about these ideas and many thousands will get inspired by them. Very glad Open Phil pursued this and hoping to see more efforts like this.
Agreed, this appears to be the most neutral interpretation. Since the marginal value of increasing "EA coffers" depends on what EA as a whole spends its money on, it could function as a pretty useful metric for intuitively communicating value across cause areas, imo. A disadvantage might be that it's not a very concrete metric, unlike something like the QALY. Additionally, someone needs to have a somewhat accurate understanding of what the funding distribution in EA looks like (and what the funding at the margin is being used for!) for this metric to make any sense.