I want to point out that there's something unfair that you did here. You pointed out that AI safety is more important, and that there were two doctors who left medical practice. Ryan does AI safety now, but Greg does biosecurity, and frankly, the fact that he has an MD is fairly important for his ability to interact with policymakers in the UK. So one of your examples is at least very weak, if not evidence for the opposite of what you claimed.

"A reliable way to actually do a lot of good as a doctor" doesn't just mean not practicing; many doctors are in research or policy, making a far greater difference - and their background in clinical medicine can be anywhere from a useful credential to being critical to their work.
I agree that we agree ;)

I particularly endorse the claim about tractability and effectiveness of technical changes to internal nuclear weapon security and contingency planning, both with moderate confidence.
It's not contradictory, but it seems like your comment goes against his post's insistence on nuance. Will was being careful about this sort of absolutism, and I think at least part of the reason for doing so - not alienating those who differ on specifics, and treating our conclusions as tentative - is the point I am highlighting. Perhaps I'm reading his words too closely, but that's the reason I wrote the introduction the way I did; I was making the point that his nuance is instructive.
I think it would be good to be clearer in our communication and say that we don't consider local opera houses, pet sanctuaries, homeless shelters, or private schools to be good cause areas, but there might be other good reasons for you to donate to them.
I made a similar claim here, regarding carbon offsets: https://forum.effectivealtruism.org/posts/brTXG5pS3JgTatP7i/carbon-offsets-as-an-non-altruistic-expense
For the people I know, at least, it seems to have been really good advice - the doctor part especially.
It seems like this is almost certain to be true given post-hoc selection bias, regardless of whether the advice is good - it doesn't differentiate between worlds where the advice is alienating or bad and some people leave the community, and worlds where it is good.
Strongly agree substantively about the adjacency of your point, and about the desire for a well-rounded world. I think it's a different thread of thought than mine, but it is worth being clear about as well. And see my reply to Jacob_J elsewhere in the comments, here, for how I think that can work even for individuals.
I think that negative claims are often more polarizing than positive ones, but I agree that there is a reason to advocate for a large movement that applies science and reasoning to do some good. I just think it already exists, albeit in a more dispersed form than a single "EA-lite." (It's what almost every large foundation already does, for example.) I do think that there is a clear need for an "EA-Heavy," i.e. core EA, in which we emphasize the "most" in the phrase "do the most good." My point here is that I think that this core group should be more willing to allow for diversity of action and approach. And in fact, I think the core of EA, the central thinkers and planners, people at CEA, GiveWell, Oxford, etc. already advocate this. I just don't think the message has been given as clearly as possible to everyone else.
If you're pledging 10% of your income to EA causes, none of that money should go to the local opera house or your kid's private school. (And if you instead pledge 50%, or 5%, the same is true of the other 50%, or 95%.)

What you do with the remainder of your money is a separate question - it has moral implications, but that's a different discussion. I've said this elsewhere, but think it's worth repeating:

Most supporters of EA don't tell people not to go out to nice restaurants and get gourmet food for themselves, or not to go to the opera, or not to support local organizations they are involved with or wish to support, including the arts. The consensus simply seems to be that people shouldn't confuse supporting a local museum with effective altruism, i.e. with attempting to effectively maximize global good.
I think I agree with you on the substantive points, and didn't think that people would misread it as making the bolder claim if they read the post, given that I caveated most of the statements fairly explicitly. If this was misleading, I apologize.