I ran the Forum for three years. I'm no longer an active moderator, but I still provide advice to the team in some cases.
I'm a Communications Officer at Open Philanthropy. Before that, I worked at CEA, on the Forum and other projects. I also started Yale's student EA group, and I spend a few hours a month advising a small, un-Googleable private foundation that makes EA-adjacent donations.
Outside of EA, I play Magic: the Gathering on a semi-professional level and donate half my winnings (more than $50k in 2020) to charity.
Before my first job in EA, I was a tutor, a freelance writer, a tech support agent, and a music journalist. I blog, and keep a public list of my donations, at aarongertler.net.
I didn't get that message at all. If someone tells me they downvoted something I wrote, my default takeaway is "oh, I could have been more clear" or "huh, maybe I need to add something that was missing" — not "yikes, I shouldn't have written this". *
I read Max's comment as "I thought this wasn't written very clearly/got some things wrong", not "I think you shouldn't have written this at all". The latter is, to me, almost the definition of a strong downvote.
If someone sees a post they think (a) points to important issues, and (b) gets important things wrong, any of upvote/downvote/decline-to-vote seems reasonable to me.
*This is partly because I've stopped feeling very nervous about Forum posts after years of experience. I know plenty of people who do have the "yikes" reaction. But that's where the users' identities and relationship come into play — I'd feel somewhat differently had Max said the same thing to a new poster.
I'll read any reply to this and make sure CEA sees it, but I don't plan to respond further myself, as I'm no longer working on this project.
Thanks for the response. I agree with some of your points and disagree with others.
To preface this, I wouldn't make a claim like "the 3rd edition was representative for X definition of the word" or "I was satisfied with the Handbook when we published it" (I left CEA with 19 pages of notes on changes I was considering). There's plenty of good criticism that one could make of it, from almost any perspective.
It’s pretty clear that the curriculum places way more weight on the content it frames as “essential” than on the content linked at the bottom of the “further reading” section.
I agree.
But much, maybe most, of the "essential" reading in the first three sections isn’t really about neartermist (or longtermist) causes. For instance, “We are in triage every second of every day” is about… triage. I’d also put “On Fringe Ideas”, “Moral Progress and Cause X”, “Can one person make a difference?”, “Radical Empathy”, and “Prospecting for Gold” in this bucket.
Many of these have ideas that can be applied to either perspective. But the examples they actually discuss are mostly near-term causes.
This is different from e.g. detailed pieces describing causes like malaria prevention or vitamin supplementation. I think that's a real gap in the Handbook, and worth addressing.
But it seems to me like anyone who starts the Handbook will get a very strong impression in those first three sections that EA cares a lot about near-term causes, helping people today, helping animals, and tackling measurable problems. That impression matters more to me than cause-specific knowledge (though again, some of that would still be nice!).
However, I may be biased here by my teaching experience. In the two introductory fellowships I've facilitated, participants who read these essays spent their first three weeks discussing almost exclusively near-term causes and examples.
By contrast, the essential reading in the “Longtermism”, “Existential Risk”, and “Emerging technologies” sections is all highly focused on longtermist causes/worldview; it’s all stuff like “Reducing global catastrophic biological risks”, “The case for reducing existential risk”, and “The case for strong longtermism”.
I agree that the reading in these sections is more focused. Nonetheless, I still feel like there's a decent balance, for reasons that aren't obvious from the content alone:
“Pascal’s mugging” is relevant to, but not specific to, longtermism
I don't think I've seen Pascal's Mugging discussed in any non-longtermist context, unless you count actual religion. Do you have an example on hand for where people have applied the idea to a neartermist cause?
"The case of the missing cause prioritization research” doesn’t criticize longtermist ideas per se, it more argues that the shift toward prioritizing longtermism hasn’t been informed by significant amounts of relevant research.
I agree. I wouldn't think of that piece as critical of longtermism.
As far as I can tell, no content in this whole section addresses the most frequent and intuitive criticism of longtermism I’ve heard (that it’s really really hard to influence the far future so we should be skeptical of our ability to do so).
I haven't gone back to check all the material, but I assume you're correct. I think it would be useful to add more content on this point.
This is another case where my experience as a facilitator warps my perspective; I think both of my groups discussed this, so it didn't occur to me that it wasn't an "official" topic.
Process-wise, I don’t think the use of test readers was an effective way of making sure the handbook was representative. Each test reader only saw a fraction of the content, so they’d be in no position to comment on the handbook as a whole.
I agree. That wasn't the purpose of selecting test readers; I mentioned them only because some of them happened to make useful suggestions on this front.
While I’m glad you approached members of the animal and global development communities for feedback, I think the fact that they didn’t respond is itself a form of (negative) feedback (which I would guess reflects the skepticism Michael expressed that his feedback would be incorporated).
I wrote to four people, two of whom (including Michael) sent useful feedback. The other two also responded; one said they were busy, the other seemed excited/interested but never wound up sending anything.
A 50% useful-response rate isn't bad, and makes me wish I'd sent more of those emails. My excuse is the dumb-but-true "I was busy, and this was one project among many".
(As an aside, if someone wanted to draft a near-term-focused version of the Handbook, I think they'd have a very good shot at getting a grant.)
I’d feel better about the process if, for example, you’d posted in poverty- and animal-focused Facebook groups and offered to pay people (like the test readers were paid) to weigh in on whether the handbook represented their cause appropriately.
I'd probably have asked "what else should we include?" rather than "is this current stuff good?", but I agree with this in spirit.
(As another aside, if you specifically have ideas for material you'd like to see included, I'd be happy to pass them along to CEA — or you could contact someone like Max or Lizka.)
This is a minor point in some ways but I think explicitly stating "I downvoted this post" can say quite a lot (especially when coming from someone with a senior position in the community).
I ran the Forum for 3+ years (and, caveat, worked with Max). This is a complicated question.
Something I've seen many times: A post or comment is downvoted, and the author writes a comment asking why people downvoted (often seeming pretty confused/dispirited).
Some people really hate anonymous downvotes. I've heard multiple suggestions that we remove anonymity from votes, or require people to input a reason before downvoting (which is then presumably sent to the author), or just establish an informal culture where downvotes are expected to come with comments.
So I don't think Max was necessarily being impolite here, especially since he and Luke are colleagues who know each other well. Instead, he was doing something that some people want a lot more of and other people don't want at all. This seems like a matter of competing access needs (different people wanting different things from a shared resource).
In the end, I think it's down to individual users to take their best guess at whether saying "I downvoted" or "I upvoted" would be helpful in a given case. And I'm still not sure whether having more such comments would be a net positive — it probably depends on the circumstances.
***
Max having a senior position in the community is also a complicated thing. On the one hand, there's a risk that anything he says will be taken very seriously and lead to reactions he wouldn't want. On the other hand, it seems good for leaders to share their honest opinions on public platforms (rather than doing everything via DM or deliberately softening their views).
There are still ways to write better or worse comments, but I thought Max's was reasonable given the balancing act he's trying to do (and the massive support Luke's post had gotten already — I'd feel differently if Max had been joining a pile-on or something).
While at CEA, I was asked to take the curriculum for the Intro Fellowship and turn it into the Handbook, and I made a variety of changes (though there have been other changes to the Fellowship and the Handbook since then, making it hard to track exactly what I changed). The Intro Fellowship curriculum and the Handbook were never identical.
I exchanged emails with Michael Plant and Sella Nevo, and reached out to several other people in the global development/animal welfare communities who didn't reply. I also had my version reviewed by a dozen test readers (at least three readers for each section), who provided additional feedback on all of the material.
I incorporated many of the suggestions I received, though at this point I don't remember which came from Michael, Sella, or other readers. I also made many changes on my own.
It's reasonable to argue that I should have reached out to even more people, or incorporated more of the feedback I received. But I and the other people who worked on this at CEA were very aware of representativeness concerns. And I think the 3rd edition was a lot more balanced than the 2nd edition. I'd break down the sections as follows:
In the end, you have three "neartermist" sections, four "longtermist" sections (if you count career choice), and one "neutral" section (critiques and counter-critiques that span the gamut of common focus areas).
This is a tricky question to answer, and there's some validity to your perspective here.
I was speaking too broadly when I said there were "rare exceptions" when epistemics weren't the top consideration.
Imagine three people applying to jobs:
I could imagine Bob beating Alice for a "build a new group" role (though I think many CB people would prefer Alice), because friendliness is so crucial.
I could imagine Carol beating Alice for an ops role.
But if I were applying to a wide range of positions in EA and had to pick one trait to max out on my character sheet, I'd choose "epistemics" if my goal were to stand out in a bunch of different interview processes and end up with at least one job.
One complicating factor is that there are only a few plausible candidates (sometimes only one) for a given group leadership position. Maybe the people most likely to actually want those roles are the ones who are really sociable and gung-ho about EA, while the people who aren't as sociable (but have great epistemics) go into other positions. This state of affairs allows for "EA leaders love epistemics" and "group leaders stand out for other traits" at the same time.
Finally, you mentioned "familiarity" as a separate trait from epistemics, but I see them as conceptually similar when it comes to thinking about group leaders.
Common questions I see about group leaders include "could this person explain these topics in a nuanced way?" and "could this person successfully lead a deep, thoughtful discussion on these topics?" These and other similar questions involve familiarity, but also the ability to look at something from multiple angles, engage seriously with questions (rather than just reciting a canned answer), and do other "good epistemics" things.
In August 2014, I co-founded Yale EA (alongside Tammy Pham). Things have changed a lot in community-building since then, and I figured it would be good to record my memories of that time before they drift away completely.
If you read this and have questions, please ask!
Timeline
I was a senior in 2014, and I'd been talking to friends about EA for years by then. Enough of them were interested (or just nice) that I got a good group together for an initial meeting, and a few agreed to stick around and help me recruit at our activities fair. One or two of them read LessWrong, and aside from those, no one had heard of effective altruism.
The group wound up composed largely of a few seniors and a bigger group of freshmen (who then had to take over the next year — not easy!). We had 8-10 people at an average meeting.
Events we ran that first year included:
We also ran some projects, most of which failed entirely:
What it was like to run a group in 2014: Random notes
But mostly, it was really hard
The current intro fellowships aren't perfect, and the funding debate is real/important, but oh god things are so much better for group organizers than they were in 2014.
I had no idea what I was doing.
There were no reading lists, no fellowship curricula, no facilitator guides, no nothing. I had a Google doc full of links to favorite articles and sometimes I asked people to read them.
I remember being deeply anxious before every meeting, event, and email send, because I was improvising everything and barely knew what we were supposed to be doing (direct impact? Securing pledges? Talking about cool blogs?).
Lots of people came to one or two meetings, saw how chaotic things were, and never came back. (I smile a bit when I see people complaining that modern groups come off as too polished and professional — that's not great, but it beats the alternative.)
I looked at my journal to see if the anxious memories were exaggerated. They were not. Just reading them makes me anxious all over again.
But that only makes it sweeter that Yale's group is now thriving, and that EA has outgrown the "students flailing around at random" model of community growth.
I'd recommend cross-posting your critiques of the "especially useful" post onto that post — that will make it easier for anyone who studies this campaign later (I expect many people will) to learn from you.
Thanks for sharing all of this!
I'm curious about your fear that these comments would negatively affect Carrick's chances. What was the mechanism you expected? The possibility of reduced donations/volunteering from people on the Forum? The media picking up on critical comments?
If "reduced donations" were a factor, would you also be concerned about posting criticism of other causes you thought were important for the same reason? I'm still working out what makes this campaign different from other causes (or maybe there really are similar issues across a bunch of causes).
One thing that comes to mind is time-sensitivity: if you rethink your views on a different cause later, you can encourage more donations to make up for a previous reduction. If you rethink views on a political campaign after Election Day, it's too late.
If that played a role, I can think of other situations that might exert the same pressure — for example, organizations running out of runway having a strong fundraising advantage if people are worried about dooming them. Not sure what to do about that, and would love to hear ideas (from anyone, this isn't specifically aimed at Michael).
I think that the principal problem pointed out by the recent "Bad Omens" post was peer pressure towards conformity in ways that lead to people acting like jerks, and I think that we're seeing that play out here as well, but involving central people in EA orgs pushing the dynamics, rather than local EA groups. And that seems far more worrying.
What are examples of "pressure toward conformity" or "acting like jerks" that you saw among "central people in EA orgs"? Are you counting the people running the campaign as “central”? (I do agree with some of Matthew’s points there.)
I guess you could say that public support for Carrick felt like "pressure". But there are many things in EA that have lots of support and also lots of pushback (e.g. community-building strategies, 80K career advice). Lots of people are excited about higher funding levels in EA; lots of people are worried about it; vigorous discussion follows.
Did something about the campaign make it feel different?
*****
Habryka expressed concern that negative evidence on the campaign would be "systematically filtered out". This kind of claim is really hard to disprove. If you don't see strong criticism of X from an EA perspective, this could mean any of:
I think that (2) and (4) are more common, and (1) less common, than many other people seem to think. I do think that (3) is common, and I wish it were less so, but I don't see that as "pressure".
If someone had published a post over the last few months titled "The case against donating to the Flynn campaign", and it was reasonably well-written, I think it would have gotten a ton of karma and positive comments — just like this post or this post or this post.
Why did no one write this?
Well, the author would need (a) the time to write a post, (b) good arguments against donating, (c) a motive (improving community epistemics, preventing low-impact donations, getting karma), and (d) comfort with publishing the post (that is, not enough self-censorship to override (c)).
I read Habryka as believing that there are (many?) people who fulfill (a), (b), and (c) but are stopped by (d). My best guess is that for many issues, including the Flynn campaign, no one fulfilled all of (a), (b), and (c), which left (d) irrelevant.
I'm not sure how to figure out which of us is closer to the truth. But I will note that writing a pseudonymous post mostly gets around (d), and lots of criticism is published that way.
(If you are someone who was stopped by (d), let me know! That's really important evidence. I'm also curious why you didn't write your post under a pseudonym.)*
I also hope the red-teaming contest will help us figure this out, by providing more people with a reason to conduct and publish critical research. If some major topic gets no entries, that seems like evidence for (b) or (d), though with the election over I don't expect anyone to write about the Flynn campaign anyway.
*I've now heard from one person who said that (d) was one factor in why they didn't leave comments — a mix of not wanting to make other commenters angry and not wanting to create community drama (the drama would happen even with a pseudonym).
Given that this response came in soon after I made my comment, I've updated moderately toward the importance of (d), though I'm still unsure what fraction of (d) is about actual Forum comments vs. the author's reputation/relationships outside of the Forum.
The flower was licensed from this site.
The designer saw and appreciated this comment, but asked not to be named on the Forum.