Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
10% Pledge #54 with GivingWhatWeCan.org
In the long term, we will hopefully invent delicious forms of meat, like cultured meat, that do not involve sentient animal suffering... When that happens, pro-natalism might make more sense.
As Kevin Kuruc argues, progress comes from people (or productive person-years), not from the bare passage of time. So we should expect that some number of productive person-years is required to solve this problem. If that's right, there simply is no meat-eater problem. As a first-pass model: removing person-years from the present doesn't reduce the number of animals harmed before a solution is found; it just makes the solution arrive later.
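The first-pass model can be made concrete with a toy calculation (all numbers here are made up for illustration, not drawn from Kuruc's work): if a fixed stock of productive person-years is needed to reach a solution, and per-year harm scales with the population, then shrinking the population lowers yearly harm and delays the solution by exactly offsetting factors, leaving total harm unchanged.

```python
# Toy model with hypothetical numbers: a solution to factory farming
# arrives once cumulative productive person-years hit a fixed threshold,
# and per-year animal harm is proportional to the (meat-eating) population.

def total_harm(population: float, threshold_person_years: float,
               harm_per_person_year: float) -> float:
    """Total animal harm accrued before the solution is found."""
    years_until_solution = threshold_person_years / population
    harm_per_year = harm_per_person_year * population
    # The population terms cancel: total = harm_per_person_year * threshold.
    return harm_per_year * years_until_solution

# Halving the population delays the solution 2x but halves yearly harm,
# so total harm before the solution is identical.
print(total_harm(1_000_000, 5e9, 10.0))  # 5e10
print(total_harm(500_000, 5e9, 10.0))    # 5e10 -- same total
```

On this simple model, marginal population changes shift *when* the solution arrives, not *how much* harm accrues before it does.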
One quick reason for thinking that academic philosophy norms should apply to the "institutional critique" is that it appears in works of academic philosophy. If people like Crary et al. are just acting as private political actors, I guess they can say whatever they want on whatever flimsy basis they want. But insofar as they're writing philosophy papers (and books published by academic presses) arguing for the institutional critique as a serious objection to Effective Altruism, I'm claiming that they haven't done a competent job of arguing for their thesis.
Such a norm would make intellectual progress impossible. We'd just spend all day accusing each other of vague COIs. (E.g.: "Thorstad is a humanities professor, in a social environment that valorizes extreme Leftism and looks with suspicion upon anyone to the right of Bernie Sanders. In such a social environment, it would be very difficult for him to acknowledge the good that billionaire philanthropists do; he will face immense social pressure to instead reduce the status of billionaires and raise the status of left-wing activists, regardless of the objective merits of the respective groups. It's worth considering whether these social pressures may have something to do with the positions he ends up taking with regard to EA.")
There's a reason why philosophy usually has a norm of focusing on the first-order issues rather than these sorts of ad hominems.
I think you've misunderstood me. My complaint is not that these philosophers openly argue, "EAs are insufficiently Left, so be suspicious of them." (That's not what they say.) Rather, they presuppose Leftism's obviousness in a different way. They seem unaware that market liberals sincerely disagree with them about what's likely to have good results.
This leads them to engage in fallacious reasoning, like "EAs must be methodologically biased against systemic change, because why else would they not support anti-capitalist revolution?" I have literally never seen any proponent of the institutional critique acknowledge that some of us genuinely believe, for reasons, that anti-capitalist revolution is a bad idea. There is zero grappling with the possibility of disagreement about which "systemic changes" are good or bad. It's really bizarre. And I should stress that I'm not criticizing their politics here. I'm criticizing their reasoning. Their "evidence" of methodological bias is that we don't embrace their politics. That's terrible reasoning!
I don't think I'm methodologically biased against systemic change, and nothing I've read in these critiques gives me any reason to reconsider that judgment. It's weird to present as an "objection" something that gives one's target no reason to reconsider their view. That's not how philosophy normally works!
Now, you could develop some sort of argument about which claims are or are not "extraordinary", and whether the historical success of capitalism relative to anti-capitalism really makes no difference to what we should treat as "the default starting point." Those could be interesting arguments (if you anticipated and addressed the obvious objections)! I'm skeptical that they'd succeed, but I'd appreciate the intellectual engagement, and the possibility of learning something from it. Existing proponents of the institutional critique have not done any of that work (from what I've read to date). And they're philosophers -- it's their job to make reasoned arguments that engage with the perspectives of those they disagree with.
How does writing a substantive post on x-risk give Thorstad a free pass to cast aspersions when he turns to discussing politics or economics?
I'm criticizing specific content here. I don't know who you are or what your grievances are, and I'd ask you not to project them onto my specific criticisms of Thorstad and Crary et al.
Thorstad acknowledged that many of us have engaged in depth with the critique he references, but instead of treating our responses as worth considering, he suggests it is "worth considering if the social and financial position of effective altruists might have something to do with" the conclusions we reach.
It is hardly "mud-slinging" for me to find this slimy dismissal objectionable. Nor is it mud-slinging to point out ways in which Crary et al. (cited approvingly by Thorstad) are clearly being unprincipled in their appeals to "systemic change". This is specific, textually-grounded criticism of specific actors, none of whom are you.
I think this point is extremely revealing:
The first linked post here seems to defend, or at least be sympathetic to, the position that encouraging veganism specifically among Black people in US cities is somehow more an attempt at "systemic change" with regard to animal exploitation than working towards lab-grown meat (the whole point of which is that it might end up replacing farming altogether).
See also Crary et al.'s lament that EA funders prioritize transformative alt-meat research and corporate campaigns over sanctuaries for individual rescued animals. They are clearly not principled advocates for systemic change over piecemeal interventions. Rather, I take these examples to show that their criticisms are entirely opportunistic. (As I previously argued on my blog, the best available evidence -- especially taking into account their self-reported motivation for writing the anti-EA book -- suggests that these authors want funding for their friends and political allies, and don't want it to have to pass any kind of evaluation for cost-effectiveness relative to competing uses of the available funds. It's all quite transparent, and I don't understand why people insist on pretending that these hacks have intellectual merit.)
That's certainly possible! I just find it incredibly frustrating that these criticisms are always written in a way that fails to acknowledge that some of us might just genuinely disagree with the critics' preferred politics, and that we could have reasonable and principled grounds for doing so, which are worth engaging with.
As a methodological principle, I think one should argue the first-order issues before accusing one's interlocutors of bias. Fans of the institutional critique too often skip that crucial first step.
Thorstad writes:
I think that the difficulty which philanthropists have in critiquing the systems that create and sustain them may explain much of the difficulty in conversations around what is often called the institutional critique of effective altruism.
The main difficulty I have with these "conversations" is that I haven't actually seen a substantive critique, containing anything recognizable as an argument. Critics don't say: "We should institute systemic policies X, Y, Z, and here's the supporting evidence why." Instead, they just seem to presuppose that a broadly anti-capitalist leftism is obviously correct, such that anyone who doesn't share their politics (for which, recall, we have been given no argument whatsoever) must be in need of psychologizing.
So consider that as an alternative hypothesis: the dialectic around the "institutional critique" is "difficult" (unproductive?) because it consists in critics psychologizing EAs rather than trying to persuade us with arguments.
Although effective altruists did engage in detail with the institutional critique, much of the response was decidedly unsympathetic. It is worth considering if the social and financial position of effective altruists might have something to do with this reaction -- not because effective altruists are greedy (they are not), but because most of us find it hard to think ill of the institutions that raised us up.
This exemplifies the sort of engagement that I find unproductive. Rather than psychologizing those he disagrees with, I would much prefer to see Thorstad attempt to offer a persuasive first-order argument for some specific alternative cause prioritization (one that diverges from the EA conventional wisdom). That would obviously be far more "worth considering" than convenient psychological stories that function to justify dismissing perspectives different from his own.
I think the latter is outright bad and detracts from reasoned discourse.
Thanks for the feedback! It's probably helpful to read this in conjunction with 'Good Judgment with Numbers', because the latter post gives a fuller picture of my view whereas this one is specifically focused on why a certain kind of blind dismissal of numbers is messed up.
(A general issue I often find here is that when I'm explaining why a very specific bad objection is bad, many EAs instead want to (mis)read me as suggesting that nothing remotely in the vicinity of the targeted position could possibly be justified, and then complain that my argument doesn't refute this -- very different -- 'steelman' position that they have in mind. But I'm not arguing against the position that we should sometimes be concerned about over-quantification for practical reasons. How could I? I agree with it! I'm arguing against the specific position stated in the post, i.e. the view that different kinds of values can't -- literally can't, in principle -- be quantified.)
I think this is confusing two forms of 'extreme'.
I'm actually trying to suggest that my interlocutor has confused these two things. There's what's conventional vs. socially extreme, and there's what's epistemically extreme; the two aren't the same. That's my whole point in that paragraph. It isn't necessarily epistemically safe to do what's socially safe or conventional.
Or if any other kind of progress (including moral progress, some of which will come from future people) will eventually abolish factory-farming. I'd be utterly shocked if factory-farming is still a thing 1000+ years from now. But sure, it is a possibility, so you could discount the value of new lives by some modest amount to reflect this risk. I just don't think that will yield the result that marginal population increases are net-negative for the world in expectation.