I've seen and heard many discussions about what EAs should do. William MacAskill has ventured a definition of Effective Altruism, and I think it is instructive. Will notes that "Effective altruism consists of two projects, rather than a set of normative claims." One consequence of this is that, if there are no normative claims, any supposition about what ought to happen based on EA ideas is invalid. This is a technical point, one which might seem irrelevant to practical concerns, but I think some of the normative claims that do get made have pernicious consequences.
So I think we should discuss why it is often harmful when "Effective Altruism" is taken to imply that there are specific and clearly preferable options for "Effective Altruists." Will's careful definition avoids that harm, and I think it should be taken seriously in that regard.
Mistaken Assertions
Claiming something normative given moral uncertainty - that is, given that we may be incorrect - is hard to justify. There are approaches to moral uncertainty that allow a resolution, but if EAs should cooperate, I argue that it may be useful, regardless of normative goals, to avoid normative statements that exclude some viewpoints. This is not because such statements cannot be justified, but because they can be strategic mistakes. Specifically, we should be wary of making the project exclusive rather than inclusive.
EA is Young, Small, and Weird
EA is very young. Some find this an obvious situation - aren't most radical movements young? Aren't the people most willing to embrace new ideas young? - but I disagree. Many of the most popular movements sweep across age groups. Environmentalism, gay rights, and animal welfare all skewed young, but were increasingly adopted by people of all ages. In part, that is because those movements allow people to embrace them without implying that they aren't doing enough. There is no widespread belief among environmentalists that doctors have wasted their careers focusing on saving lives at the retail level rather than saving the world. There is little reason anyone would hesitate to raise the pride flag because they are not doing enough for the movement. But effective altruism is often perceived differently.
To the extent that EAs embrace a single vision (a very limited extent, to be clear), they often exclude those who differ on details, intentionally or not. "Failing" to embrace longtermism, or ("worse"?) disagreeing about impartiality, is enough to start arguments. Is it any wonder that we have so few people with well-established lives and worldviews willing to consider our project, "the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources"? Nothing about the project is exclusive - it is the community that creates exclusion. And it would be a shame for people to feel useless and excluded.
Of course, allowing more diversity will allow the ideas of effective altruism to spread - but it will also reduce the tension that seems to exist around disagreeing with the orthodoxy. People debate whether EA should be large and welcoming or small and weird. But as John Maxwell suggests, large and weird might be a fine compromise. We have seen this before: the LGBT movement, now widely embraced, famously encourages people to "let your freak flag fly," but the phrase dates back to the 60s counterculture. Neither stayed small and weird, and despite each leading to a culture war, each seems to have been, at least in retrospect, very widely embraced. And neither needed to develop a single coherent worldview to get there; no one would claim that LGBT groups agree on every issue. And despite the fragmentation and arguments, the key messages came through to the broader public just fine.
EA is Already Fragmented
It may come as a surprise to readers of the forum, but many of those pushing forward the EA project are only involved for their pet causes. Animal welfare activists gladly take money, logistical help, strategic guidance, and moral support to do things they have long wanted to do. AI safety researchers may or may not embrace the values of EA, but agree it's a good idea to ensure the world doesn't end in fire. Longtermists, life-extension activists, and biosecurity researchers also have groups and projects which predate EA, and they are happy to have found fellow travelers. None of this is a bad thing.
Even within "central" EA organizations, there are debates about the relative priority of different goals. Feel free to disagree with Open Philanthropy's work prioritizing US policy - but it's one of their main cause areas. (A fact that has shocked more than one person I've mentioned it to.) Perhaps you think that longtermism is obviously correct, but GiveWell focuses mainly on the short term. We are uncertain, as a community, and conclusions that rest on suppositions about these key questions are usually unwarranted, at least without clear caveats about the positions needed to reach them.
Human Variety
Different people have different values, and they also have different skills and abilities. When optimizing almost any goal in a complex system, this variety means that the optimal path involves some diversity of approaches. That is, most goals are better served by having a range of skills available. (Note that this is a positive claim about reality, not a normative one.)
In fact, we find that a diversity of skills is useful for Effective Altruism. Web and graphic designers contribute differently than philosophers and researchers, who contribute differently than operations people, who contribute differently than logistics experts for international shipping, financial analysts, and so on. Yes, all of these skills can be paid for on the open market, though some are more expensive than others - but value alignment cannot be bought, and the movement benefits greatly from having value-aligned organizations, especially as it grows.
Hearing that being a doctor "isn't EA" is not just unfortunately dismissive, it's dead wrong. Among EA priorities, doctors have important roles to play in biosecurity, in longevity research, and in understanding how to implement the logistics of vaccine programs. In a different vein, if I had been involved and followed EA advice, I might have gone for a PhD in economics, which I already knew I would enjoy less than the PhD I actually did, in public policy. Of course, just as I was graduating, it turned out that EA organizations were getting more interested in policy. That was lucky for me, but unsurprising at the group level; of course disparate skills are needed. And a movement that pushes people to acquire a narrow set of skills will, unsurprisingly, end up with a narrow set of skills.
Conclusion
I'm obviously not opposed to every use of the word "should," and there really are many generally applicable recommendations. I'm not sure how many of them are specific to EAs - all humans should get enough sleep, and it's usually a good idea for younger people to maximize their career capital and preserve options for the future. 80,000 Hours seems to strike this balance well, but many readers see "recommended career paths" and take it as a far stronger statement than is likely intended.
The narrow vision that seems common when I talk to EAs, and to non-EAs who have interacted with EAs, is that we have correct answers for others. This is unhelpful. Instead, think of EA mentorship and advice as suggestions for those who want to follow a "priority" career path. At the same time, we should focus more on continuing to build a vision for, and paths to, improving the world. Alongside that, we have a mutable and evolving program for doing so, one that should (and will) be informed and advanced by anyone interested in being involved.
Acknowledgements: Thank you to Edo Arad for useful feedback.
I'm generally leery of putting words in other people's mouths, but perhaps people are using "bad advice" to mean different things, or at least have different central examples in mind.
There are at least 3 possible interpretations of what "bad advice" can mean here:
A. Advice that, if some fraction of people are compelled to follow it across the board, can predictably lead to worse outcomes than if the advice isn't followed.
B. Advice that, if followed by people likely to follow such advice, can predictably lead to worse outcomes than if the advice isn't followed.
C. Words that can in some sense be considered "advice" and that have negative outcomes or negative emotional affect upon being heard, regardless of whether such advice is actually followed.
Consider the following pieces of "advice":
1. "Treat your covid-19 with homeopathy."
2. "Eat raw lead nails."
#1 will be considered "bad advice" under all 3 interpretations (it would be bad if everybody treated covid-19 with homeopathy (A), it would be bad if people especially susceptible to homeopathic messaging treated covid-19 with homeopathy (B), and I will negatively judge someone for recommending self-treatment with homeopathy (C)).
#2 is "bad advice" under at most 2 of the interpretations (forcibly eating raw lead nails would be bad (A), but realistically I don't expect anybody to listen to such a "recommendation" (B), and the advice is so obviously absurd that context will determine whether I'd be upset about the suggestion (C)).
In context here, if Habryka (and, for that matter, I) don't know any EA ex-doctors who regret no longer being doctors (whereas he has positive examples of EA ex-doctors who do not regret this), that is strong evidence that telling people not to be doctors is good advice under interpretation B*, and moderate-to-weak evidence that it's good advice under interpretation A.
(I was mostly reading "bad advice" in the context of B and maybe A when I first read these comments).
However, if David/Khorton interpret "bad advice" to mean something closer to C, then it makes more sense why not knowing a single person harmed by following such advice is not a lot of evidence for whether the advice is actually good or bad.
* I suppose you could posit a selection-effect world where there's a large "dark matter" population of former EAs/former doctors who quit the medical profession, regretted that choice, and then quit EA in disgust. This claim is not insane to me, but it's not where I place the balance of my probabilities.