Animal welfare work has the potential to be much more cost-effective than work on global poverty. While it depends greatly on how much you value a human compared to a nonhuman animal, the suffering in factory farms appears quite severe, and the number of factory-farmed animals (~9-11B in the US, many more globally[1]) is greater than the total world population, over 10x the number of extremely poor people, and over 40x the number of people affected by malaria.
Based on this, some have suggested that the only reason to think that animal welfare doesn’t dominate global poverty is speciesism, or the belief that nonhuman animals do not have significant moral worth.
However, another reason to think that global poverty work could be more effective than animal welfare work is based on strength of evidence -- we have enough evidence to know the very best global poverty intervention, but we don’t have enough evidence to know the very best animal welfare intervention.
In this article I want to take a look at what this might mean in practice -- when you have strength of evidence for global poverty but not for animal welfare interventions, you likely aren’t comparing the best animal welfare intervention to the best global poverty intervention. Instead you are likely comparing the mean animal welfare intervention to the best poverty intervention.
Furthermore, this could entail that global poverty is better now[2], since there are reasons to think the mean animal welfare intervention could be much worse than the best global poverty intervention.
Keep in mind, however, that I think it could entail this conclusion, not that it actually does. I use "X could be true" in the sense of "it is possible that X" or "it is reasonable for some people to think X based on what we currently know". I do not use it in the sense of "X is more likely than not" or "I believe X and you should too".
Also, even if global poverty could be better right now in the abstract, there are still many additional considerations I don’t write about here, such as thinking about marginal funding, thinking about counterfactuals, thinking about long-term flow through effects, thinking about the value of research or meta-work, etc.
The Range of Global Poverty Interventions
In “On Priors”, Michael Dickens graphed the list of cost-effectiveness estimates from the DCP2 and found an exponential curve in terms of $/DALY (blue is the minimum estimate, red is the maximum estimate):
My own analysis of the raw data provided shows the minimum estimates distributed with a mean of $804.78/DALY, a median of $313.50/DALY, a min of $1/DALY, a max of $5588/DALY, and a standard deviation of $1250.12/DALY. The maximum estimates are distributed with a mean of $3557.66/DALY, a median of $929/DALY, a min of $5/DALY, a max of $26813/DALY, and a standard deviation of $6738/DALY.
While there are good reasons not to take the DCP2 estimates too literally, we’re lucky there’s a large wealth of research on global health interventions, which allows us to make reasonable attempts at ranking different interventions in order of their cost-effectiveness.
If we did not have this research and had to sample an intervention at random, we would expect to end up with the mean intervention, with a potential cost-effectiveness of ~$804.78-$3557.66/DALY. Using the DCP2, we can instead select an intervention with a potential cost-effectiveness of ~$1-5/DALY, a potential gain of over 700x!
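As a minimal sketch of that arithmetic (using only the summary figures quoted above, not the raw DCP2 data):

```python
# Rough illustration of the mean-vs-best gap, using the summary
# statistics quoted above rather than the raw DCP2 data.
mean_min, mean_max = 804.78, 3557.66  # $/DALY of the mean intervention
best_min, best_max = 1, 5             # $/DALY of the best intervention

print(f"gain from picking the best: {mean_max / best_max:.0f}x-{mean_min / best_min:.0f}x")
# -> roughly 712x-805x, i.e. "over 700x"
```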
Comparing to Animal Welfare Work
Comparatively, there is very little research on how to best improve animal welfare, and what research that does exist has historically lacked control groups, been statistically underpowered, and suffered from many other problems (see Section 4 of “Methodology for Studying the Influence of Facebook Ads on Meat Reduction” for a good review).
The largest-scale RCT on animal advocacy to date was only powered enough to rule out a conversion rate of 4.2% or higher at 80% confidence, though we aren’t sure how much lower the true conversion rate is or whether there are other interventions with a bigger success rate. If we were to come up with some sort of DALY vs. intervention graph for animal rights, what would it look like?
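(As an aside, here is a rough sketch of what a power calculation like the one just mentioned involves, using a standard normal approximation. The baseline rate and per-arm sample size below are placeholders I made up, not the study’s actual figures.)

```python
import numpy as np
from scipy.stats import norm

def power_two_proportions(p0, p1, n_per_arm, alpha=0.05):
    # Normal-approximation power of a one-sided two-sample test of proportions.
    se = np.sqrt(p0 * (1 - p0) / n_per_arm + p1 * (1 - p1) / n_per_arm)
    return 1 - norm.cdf(norm.ppf(1 - alpha) - (p1 - p0) / se)

# Placeholder numbers, not the study's actual baseline rate or sample size.
p0, n = 0.01, 800
detectable = [p1 for p1 in np.arange(0.015, 0.08, 0.001)
              if power_two_proportions(p0, p1, n) >= 0.8]
print(f"smallest conversion rate detectable at 80% power: {detectable[0]:.1%}")
```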
While we’re likely smart enough to exclude interventions that are likely to be quite inferior from a scale perspective, like farm sanctuaries, we still likely face a large range of potential cost-effectiveness. While we don’t know yet if the shape is log-normally distributed like it appears to be for global health interventions[3], it seems to me that a lot of the interventions that “smart money” would pick include the possibility of no impact (being actually worthless) and net negative impact[4] (actively causing more harm for each dollar spent), even before considering their possible far-future effects.
This suggests that while the best intervention in animal rights could exceed that of global health by ~250x[5], the mean intervention could be much worse than the best intervention from global poverty. Since we don’t have enough evidence yet to pinpoint the best animal welfare intervention, we’re in the same situation as before where we are staring at an unlabeled graph, forced to pick the mean intervention.
Extending to the Far Future
At this point there’s still an open question about how to extend this to the far future. This is quite hard for reasons that Michael Dickens points out in “Are GiveWell Top Charities Too Speculative?” -- while we might have a pretty good idea of what the near-term effects of GiveWell top charities are (namely, less malaria, fewer parasitic infections, and more wealth) and we might have some idea of the medium-term effects (more economic growth, essentially no net population growth), we have no idea of the long-term effects, and this could dramatically change the overall cost-effectiveness.
This undermines the “strength of evidence” approach in favor of global poverty, but there are still many plausible views one could hold that suggest global poverty comes out ahead. For example, one could reasonably think that…
- Economic growth is likely a net good, and animal welfare work is undermined by not having much of an effect on that growth.
- The effects of animal advocacy on long-term values shifting are not as large as believed, or are particularly fragile and unlikely to propagate across generations.
- The flow-through effects are unlikely to be larger than the direct effects, which, in global poverty, are more clearly good.
It’s not clear to me which of these views, if any, are correct, and I hope to explore them a lot more, including the many other views that I did not write down here. However, it is clear to me that one could reasonably think that global poverty is more effective than animal advocacy, even while agreeing that nonhuman animals have significant moral value, based on the principle of comparing the best intervention to the mean intervention.
Edit: This post originally incorrectly assumed that randomly selecting from all possible interventions would yield the median cost-effectiveness, not the mean cost-effectiveness.
Endnotes
[1]: I’ve heard numbers around 60B (for one example, from ACE, though I’ve also heard it elsewhere), but I’ve never been able to track down an authoritative citation for this (nor have I tried particularly hard). However, I don’t think the precise number is that important for this analysis.
[2]: I think this analogy can be similarly extended to most other causes where there isn’t much evidence yet to pick among a large range of potential interventions, many of which are of zero impact or net negative.
[3]: I’m pretty curious what the overall shape of interventions plotted against cost-effectiveness would look like. I think in the case of nonhuman animal advocacy there could be reasons to think that the shape could look pretty weird if there is a large possibility of net-negative effects[4] or if many messages had roughly the same outcome (for example, maybe online ads and pamphlets are roughly equally persuasive).
[4]: This would be possible if, for example, it were true that caged-free campaigns result in a lower living standard for hens (though see the response from Bollard and the ensuing discussion) or if Direct Action Everywhere’s confrontational activism approach actually drove people away from animal rights. Both of these seem plausible enough to me to introduce negative values into my confidence interval for their cost-effectiveness estimates, even if I don’t think they are more likely than not.
[5]: For example, if the true conversion rate of online ads happens to be 3%, this suggests ~144-80388 days of animal suffering averted per dollar (using the “Simple Calculator”, fixing conversions / pamphlet at 0.03). If we assume humans are worth 1-300x more than nonhumans, this crudely suggests an estimate of 0.001-220 DALY / $. Flipping that to $/DALY would be $0.004 - $1000 / DALY.
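A sketch of this footnote’s arithmetic in code (the day counts are the “Simple Calculator” outputs quoted above; the 1-300x moral-weight range is the assumption stated in the text):

```python
# Reproducing footnote 5's arithmetic.
days_per_dollar = (144, 80388)  # days of animal suffering averted per $
human_weight = (1, 300)         # humans valued 1x to 300x a nonhuman animal

daly_low = days_per_dollar[0] / 365 / human_weight[1]   # ~0.0013 DALY/$
daly_high = days_per_dollar[1] / 365 / human_weight[0]  # ~220 DALY/$

# Inverting gives ~$0.005-$760/DALY; the $1000 figure in the text comes
# from rounding 0.0013 down to 0.001 before inverting.
print(f"{daly_low:.4f}-{daly_high:.0f} DALY/$, "
      f"or ${1 / daly_high:.3f}-${1 / daly_low:.0f}/DALY")
```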
I agree with the general principle: non-robust estimates should be discounted, and thus areas where the evidence is less robust (e.g. animal welfare, and probably a fortiori far future) should be penalized compared to areas with more robust estimates (e.g. global poverty) in addition to any 'face value' comparison.
I also agree it is plausible that global poverty interventions may be better than interventions in more speculative fields, because we are closer to selecting randomly from these wide and unknown distributions in the latter case. So even if the 'mean' EV of a global health charity is << the mean EV of (say) a far future cause, the EV of a top GiveWell charity may be higher than our best guess for the best animal welfare cause, even making generous assumptions about the ancillary issues required (e.g. inter-species welfare comparison, pop ethics, etc. etc. etc.)
However, I think your illustration is probably slanted too unfavourably towards the animal welfare cause, for two reasons.
Due to regression to the mean and bog-standard measurement error, it seems likely that estimates of the best global poverty interventions will be biased high, ditto any subsequent evaluation which 'selects from the top' (e.g. GiveWell, GWWC). So the actual value of the best measured charity will be less than 70x greater than the actual mean.
I broadly agree with your remarks about the poverty of the evidence base in animal welfare interventions. However it seems slightly too conservative to discard all information from (e.g.) ACE entirely.
The DCP data does give fair evidence that the distribution for global poverty interventions is approximately log-normal, and I'd guess its mean is fairly close to the 'true' population value. It is unfortunate that there is no similar work giving approximate distribution type or parameters for animal welfare/global poverty/anything else causes. I would guess it is also lognormally distributed-ish (thereabouts; I agree with your remarks about plausibly negative values), although with an unclear mean.
I have been working on approaches for correcting regression to the mean issues. Although the results are mathematically immature and probably mistaken (my previous attempt was posted here, which I hope to return to in time - see particularly Cotton-Barratt's remarks), I think the two qualitative takeaways are important: 1) with heavy-tailed (e.g. log-normal) distributions, regression to the mean can easily knock orders of magnitude off the estimate for the 'best performing' interventions; 2) regression to the mean can bite much more (namely, orders of magnitude more) off a less robust estimate than a more robust estimate.
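A toy simulation of both takeaways, with all parameters invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
true = rng.lognormal(mean=0, sigma=2, size=n)  # heavy-tailed true effectiveness

for noise_sigma in (0.5, 1.5):  # more noise = less robust estimates
    measured = true * rng.lognormal(mean=0, sigma=noise_sigma, size=n)
    winner = np.argmax(measured)
    print(f"noise {noise_sigma}: measured {measured[winner]:.0f} "
          f"vs true {true[winner]:.0f}")
# The best *measured* intervention overstates its true value, and the
# overstatement grows by orders of magnitude as the noise increases.
```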
For these reasons, comparing (e.g.) top GiveWell charity estimates to ACE's effectiveness estimates is probably illegitimate, as the latter's estimates will probably have much greater expected error, in part due to ACE being a smaller org with less human and capital resources to devote to the project, and (probably more importantly) the considerably worse evidence base it has to work with. For similar reasons, arguments of the form 'I did a Fermi estimate with conservative assumptions, and it turns out X has a QALY yield a thousand/million/whatever times greater than GiveWell top charities, therefore X is better' warrant withering scepticism.
How to go further than this likely requires distributional measures we are unlikely to get good access to save for global poverty. There is some research one could perform to get a handle on regression to the mean, and potentially some analytic or simulation methods to estimate 'true' effectiveness conditioned on the error-prone estimate, and I hope to attack some of this work in due course. For other fields, similar to Dickens, one may conjecture different distributions and test for sensitivity; my fear is that the results of these analyses will prove wholly sensitive to recondite statistical considerations.
I also agree with your remarks that global poverty causes may prove misleadingly robust given the challenge of estimating flow-through effects and differential impact given varying normative assumptions. Thus the 'true' EV of even the most robustly estimated global poverty cause likely has considerable error, and these errors may not be easy to characterize; plausibly, in many cases 'pinning down' the relevant variables may demand a greater sample size than can be obtained prospectively within the Earth's future light-cone. I leave as an exercise to the reader how this may undermine the method within EA of relying on data, cost-effectiveness estimates, and so forth.
I think this is an imperfect comparison since finding the most effective health intervention isn't equivalent to finding the most effective animal charity; rather, finding the most effective health intervention is equivalent to finding the cheapest website to put up vegan ads. The true uncertainty over poverty relief - the way it interacts with governance systems and dictates long term development trends - necessarily needs to be included as well. And we don't know much about which poverty charities are best under that metric. I don't think it's right to imply that it is a vague or optional "far future" concern, as it's a concrete issue that can't be ignored.
Moreover, why refer to the median animal intervention as the baseline? If we want the expected value of picking a charity at random, we'll want to use the average. And if there are a few interventions which are 250x as good as the best poverty charities with most of them being ineffective, then that will strongly improve the estimate above what we would get if using the median.
Besides, we're not totally blind about animal charities and we can still make some improvements over random charity selection. I think it's reasonable to expect that we have at least a 1-in-250 chance of selecting the best animal charity. If so, then we should expect animal charity to be better than human charity on average.
Note that if this does turn out to be a close decision then the meat eater problem becomes significant again. The typical response to problems of meat consumption is that anyone who cares sufficiently about animals will donate to animal charities anyway; however, if the OP's reasoning is correct then we now have to take it seriously into consideration.
The biggest takeaway here is that animal charity research is a really good cause.
"The biggest takeaway here is that animal charity research is a really good cause."
I agree - if we're highly certain we've found the best poverty interventions, or close to it, and the best animal interventions might be ~250x as effective as the best poverty interventions, that should argue for increased animal charity research. But Peter is definitely right that the higher robustness of existing human interventions (ignoring flow-on effects like the poor meat eater problem) is a potentially valid reason to pick poverty interventions now over animal interventions now.
Yep, you're right... I got confused about that... :/ Updated the post!
Of course... and I don't intend to ignore it; in fact I explicitly dedicate a section to it. But it's hard to write a comprehensive article about that right now and I thought it would be good to get this thought out now with a bunch of caveats.
It certainly is an imperfect comparison, but it's not as imperfect as what you suggest. While there are many problems with DCP and while DCP does not take into account flow through effects, it at least takes into account whether there is reasonable first pass evidence that the intervention works, which we definitely don't have for veg ads.
To oversimplify again, imagine there are three things we need to know:
1. Do veg ads / water purification work?
2. What is the cheapest way to run veg ads / water purification? What's the cost-effectiveness?
3. What effects do veg ads / water purification have in the long run? How large are these effects relative to the direct effects?
For water purification, we mostly have 1 and 2, whereas for veg ads we have some guesses at 2 and very little work on 1.
Of course, 3 may dominate and make work on 1 or 2 moot.
Yes, I believe I account for that in the post. Likewise, DCP is not a purely random selection of global poverty charities.
I'm not sure how that follows. Could you sketch that out for me?
Yes, it seems pretty plausible that could be a crucial consideration for picking between global poverty and animal welfare interventions right now. However, I don't think everyone who is concerned about the poor meat eater problem donates to animal charities, though I guess it depends on what you mean by "cares sufficiently".
I know, I'm just saying that the method of presentation is sort of problematic because it implies that the future effects are a secondary or unnecessary concern. Some people think that it's silly to worry about the effect of interventions on e.g. social systems 200 years from now, so they ignore far future effects. But the effects of poverty interventions upon social directions and governance within a 25-year timeframe are a different story and shouldn't be lumped in with the former kinds of issues.
Well if the best animal charities are 250x better than the best human charities then a 1-in-250 chance of picking the best animal charity implies that it's just as good as completely certain donations to the best human charity.
That would only be true if there were no charities of negative value that you might accidentally pick.
Assuming a reasonable prior about the effects of charities, if there are a few at 250x then there are also more at 200x, 150x and 100x that we are likely to fund, but the chance that we would accidentally pick a harmful charity when we think we are picking the best one is tiny if we know anything about charities. Even granting an assumption of being totally ignorant about charities and picking randomly, to argue that human charities are better you would have to assume that for every effort which is +250, there is an effort which is at least as bad as -249, and for every effort which is +200, there is an effort which is at least as bad as -199, or at least an average which has the same effect - with almost half of animal charities being net negative.
Note that in that case you would be arguing that the vast majority of the perceived superiority of animal charities is due simply to variance. That seems false because we have strong reasons to expect animal charities to be fundamentally more effective due to the neglectedness of the issue and the intensity of the problem, and I don't see any prior reason to expect animal charities to be more variable in effectiveness than human charities.
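A quick numerical sketch of this claim, with invented numbers (units: multiples of the best poverty charity's effectiveness):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented numbers; units are multiples of the best poverty charity's value.
# A heavy right tail reaching the ~250x range pushes the mean well above 1:
animal = rng.lognormal(mean=1, sigma=2, size=100_000)
print(f"mean with no harmful charities: {animal.mean():.1f}")  # ~20

# Even making 40% of charities harmful (averaging -15x) leaves the mean above 1:
harmful = rng.random(animal.size) < 0.4
animal[harmful] = -30 * rng.random(harmful.sum())
print(f"mean with a large harmful tail: {animal.mean():.1f}")  # ~6
```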
You don't necessarily have to assume the impacts are normally distributed around 0 -- they could take a wide variety of distributions.
Why not? It seems much easier to be accidentally counterproductive in animal rights advocacy than in global health.
Yes, like I said: "or at least an average which has the same effect". Whatever distribution you assume would be implausible. Either there's just a few animal charities which are horrifically bad, like thousands of times worse than the best human charities are good... or the vast, vast majority of animal charities account for some kind of moderate harm.
Global health efforts do have controversial outcomes, and animal advocacy efforts are mostly advancing on mutually supporting fronts of changing ideas and behavior. I really don't see where this seeming-ness comes from, especially not the degree of seeming-ness that would be needed to indicate that the variance of animal charities is ten or twenty or a hundred times greater than that of human charities.
I'm glad someone's getting something out of the two hours I spent manually copying all the DCP2 estimates :)
It looks like you're using the mean of the DCP2 estimates to estimate the expected value of funding an intervention at random. I believe the correct way to do this would be to take the reciprocals of the cost-effectiveness estimates and then take their arithmetic mean, since that tells us the actual expected value of picking an intervention at random—which is a better analogue to what we're doing with animal interventions. That gives us a minimum estimate mean of $9.65/DALY and a maximum estimate mean of $64/DALY. Then the difference between the mean and the best intervention is only about 10x. This suggests that we should expect picking a factory farming intervention at random to have fairly high expected value, if we think animal interventions follow a similar distribution to the interventions in DCP2.
EDIT: I would expect on priors to find that animal interventions are at least 10x more effective than global poverty interventions in general*. If animal interventions do follow a distribution like the DCP2, that suggests that blindly picking an animal intervention should have similar or higher expected value than picking the best global poverty intervention.
*For a few related reasons, including: (1) we spend hundreds of billions of dollars on global poverty but less than $20 million on factory farming; (2) most people (implicitly) discount animals by several orders of magnitude more than they should; (3) there are >10 times more factory-farmed animals than there are humans in extreme poverty.
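To illustrate the reciprocal-mean point with made-up $/DALY figures (the real calculation uses the full DCP2 list):

```python
import numpy as np

# Made-up $/DALY estimates standing in for the full DCP2 list.
cost_per_daly = np.array([1.0, 5.0, 50.0, 300.0, 1000.0, 5000.0])

arithmetic_mean = cost_per_daly.mean()         # ~1059 $/DALY
expected_dalys = (1.0 / cost_per_daly).mean()  # expected DALYs per $1, random pick
effective_cost = 1.0 / expected_dalys          # ~4.9 $/DALY (the harmonic mean)

print(f"arithmetic mean: ${arithmetic_mean:.0f}/DALY; "
      f"a random pick actually buys DALYs at ${effective_cost:.1f}/DALY")
# The cheapest interventions dominate the expectation, so random selection
# is far better than the arithmetic mean of $/DALY suggests.
```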
I take issue with the statement "it depends greatly on how much you value a human compared to a nonhuman animal". Similar things are often said by EAs in discussions of animal welfare. This makes it seem as if the value one places on nonhumans is a matter of taste, rather than a claim subject to rational argument. The statement should read "it depends greatly on how much we ought to value a human compared to a nonhuman".
Imagine if EAs went around saying "it depends on how much you value an African relative to an American". Maybe there is more reasonable uncertainty about between- as opposed to within-species comparisons, but still we demand good reasons for the value we assign to different kinds of humans. This idea is at the core of Effective Altruism. We ought to do the same with non-human sentients.
I feel like wading into this debate is likely to be emotionally charged and counterproductive, but I think it is reasonable to have a good deal of "moral uncertainty" when it comes to doing interspecies comparisons, whereas there'd be much less uncertainty (though still some) when comparing between humans (e.g., is a pregnant person worth more? Is a healthy young person worth more than an 80-year-old in a coma?).
For example, one leading view would be that one chicken has equal worth to one human. Another view would be to discount the chicken by its brain size relative to humans, which would imply a value of 300 chickens per human. There are also many views in between and I'm uncertain which one to take.
Sure, such moral calculus may seem very crude, but it does not judge the animal merely by species.
I'm not objecting to having moral uncertainty about animals. I'm objecting to treating animal ethics as if it were a matter of taste. EAs have rigorous standards of argument when it comes to valuations of humans, but when it comes to animals they often seem to shrug and say "It depends on how much you value them" rather than discussing how much we should value them.
I didn't intend to actually debate what the proper valuation should be. But FWIW, the attitude that talking about how we should value animals "is likely to be emotionally charged and counterproductive" - an attitude I think is widespread given how little I've seen this issue discussed - strikes me as another example of EAs' inconsistency when it comes to animals. No EA hesitates to debate, say, someone's preference for Christians over Muslims. So why are we afraid to debate preference among species?
FWIW, I agree that there probably exist objective facts about how to value different animals relative to each other, and people who claim to value 1 hour of human suffering the same as 1000 hours of chicken suffering are just plain wrong. But it's pretty hard to convince people of this, so I try to avoid making arguments that rely on claiming high parity of value between humans and non-human animals. If you're trying to make an argument, you should avoid making assumptions that many readers will disagree with, because then you'll just lose people.
I took it that the point by Jesse was about how one should frame these issues, not that one should assume a high parity of value between human and nonhuman animals or whatever. The idea is only that these value judgements are properly subject to rational argument and should be framed as if they are.
An aside: meta-ethics entered the discussion unhelpfully here and below. It can be true that one ought to value future generations/nonhuman animals a certain way on a number of anti-realist views (subjectivism, versions of non-cognitivism). Further, it's reasonable to hold that one can rationally argue over moral propositions, even if every moral proposition is false (error theory), in the same way that one can rationally argue over an aesthetic proposition, even if every aesthetic proposition is false. One can still appeal to reasons for seeing or believing a given way in either case. Of course, one will understand those reasons differently than the realist, but the upshot is that the 'first-order' practice is left untouched. On the plausible moral anti-realist theories, our first-order moral practices will remain largely untouched, in the same way that, on most normative anti-realist theories concerning ideas like 'one ought to believe that x' or 'one ought to do x', our relevant first-order practices will remain largely untouched.
People can discuss the reasons that they have certain moral or aesthetic preferences. They may even change their mind as a result of these discussions. But there's nothing irrational about holding a certain set of preferences, so I object to EAs saying that particular preferences are right or wrong, especially if there's significant disagreement.
Sure there can be. As trivial cases, people could have preferences which violate VNM axioms. But usually when we talk about morality we don't think that merely following the weakest kind of rationality is sufficient for a justified ethical system.
Assuming moral anti-realism, as many EAs do, people can rationally disagree over values to an almost unlimited degree. Some strict definitions of utilitarianism would require one to equally value animal and human suffering, discounting for some metric of consciousness (though I actually roughly agree with Brian Tomasik that calling something conscious is a value judgment, not an empirical claim). But many EAs aren't strict utilitarians.
EAs can have strong opinions about how the message of EA should be presented. For example, I think EA should discourage valuing the life of an American 1000x that of a foreigner, or valuing animal suffering at 0. But nitpicking over subjective values seems counterproductive.
Metaethical claims don't have a strong hold on normative issues. We can rationally disagree as moral realists to the extent that we have reasons for or against various moral principles. Anti-realists can disagree to the same extent based on their own reasons for or against moral principles, but it's not obvious to me that they have any basis for rationally holding a range of moral principles which is wider than that which is available to the moral realist. At the very least, that's not how prominent anti-realist moral philosophers seem to think.
The realism vs anti-realism debate is about how to construe normative claims, not about which normative claims are justified or not. Taking the side of anti-realism doesn't provide a ticket to select values arbitrarily or based on personal appeal.
There's no basis in anti-realism for saying that a moral system is objectively unjustified, no matter how reprehensible. "Justified" and similar words are value judgments, and anti-realism doesn't accept the existence of objective value. Like, when you say "doesn't provide a ticket", that implies requiring permission. Permission from whom or what?
Anti-realism is such an uncomfortable philosophy that people are often unwilling or unable to accept its full implications.
Moral particularism isn't explicitly anti-realist, but very compatible: https://en.wikipedia.org/wiki/Moral_particularism
There is certainly enough basis in anti-realism for saying that moral systems are unjustified. I'm not sure what you mean by "objectively" unjustified - certainly, anti-realists can't claim that certain moral systems are true while others are false. But that doesn't imply that ethics are arbitrary. The right moral system, according to anti-realists, could be one that fits our properly construed intuitions; one that is supported by empirical evidence; one that is grounded in basic tenets of rationality; or any other metric - just like moral realists say.
Certainly it's possible for the anti-realist to claim "it is morally right to torture babies, because I feel like it," just like it's also possible for the realist to claim the same thing. And both of them will (probably) be making claims that don't make a whole lot of sense and are easy to attack.
And certainly there are plenty of anti-realists who make claims along the lines of "I believe X morality because I am selfish and it's the values I find appealing," or something of the sort; but that's simply bad philosophy which lacks justification. Actual anti-realist philosophers don't think that way.
It means that the anti-realist is missing out on some key element or intention which they are trying to include in their moral claims. They probably intend that their moral system is compatible with human intuitions, or that it is grounded in rationality, or whatever it is that they think provides the basis for morality (just like moral realists have similar metrics). And when the anti-realist makes such a moral claim, we point out: hey, your moral claims don't make sense, what reasons do you have to follow them?
Obviously the anti-realist could say that they don't care if their morality is rational, or justified, or intuitive, or whatever it is they have as the basis for morality. But the moral realist can do a similar thing: I could say that I simply don't care if my moral principles are correct or not. In both cases, there's no way to literally force them to change their beliefs, but you have exposed them for possessing faulty reasoning.
Here are some Reddit threads that might explain it better than I can (I agree with some of the commenters that ethics and metaethics are not completely separate; however, I still don't think that ethics under anti-realism is as different as you say it is):
https://www.reddit.com/r/askphilosophy/comments/3qh90s/whats_the_relationship_between_metaethics_and/
https://www.reddit.com/r/askphilosophy/comments/3fu710/how_is_it_possible_for_the_ethical_theory_to/
https://www.reddit.com/r/askphilosophy/comments/356g4r/can_an_ethical_noncognitivist_accept_any/
Okay, so maybe it's not distinct to anti-realism. But that only strengthens my claim that there's nothing irrational about having different values.
You keep trying to have it both ways. You say "anti-realists can't claim that certain moral systems are true while others are false." But then you substitute other words that suggest either empirical or normative claims:
"right"
"justified"
"easy to attack"
"doesn't make sense"
"proper"
"rational"
"faulty reasoning"
"bad"
Even "intuitive" is subjective. Many cultures have "intuitive" values that we'd find reprehensible. (Getting back to the original claim, many people intuitively value humans much more than other animals.)
Debating value differences can be worthwhile, but I object to the EA attitude of acting like people with different values are "irrational" or "illogical". It's "unjustified", as you'd say, and bad for outreach, especially when the values are controversial.
Again, antirealists can make normative claims just like anyone else. The difference is in how these claims are handled and interpreted. Antirealists just think that truth and falsity are the wrong sort of thing to be looking for when it comes to normative claims.
(And it goes without saying that anyone can make empirical claims.)
No, I think there are plenty of beliefs and values where we are justified in calling them irrational or illogical. Specifically, there are beliefs and values where the people holding them have poor reasons for doing so, and there are beliefs and values which are harmful in society, and there are a great deal which are in both those groups.
Maybe. Or maybe it's important to prevent these ideas from gaining traction. Maybe having a clearly-defined out-group is helpful for the solidarity and strength of the in-group.