In theory, effective altruists are committed to using reason and evidence to identify the best interventions. In practice, much of the available funding is controlled by a small number of actors including prominent donors – most recently, Sam Bankman-Fried, and now Cari Tuna and Dustin Moskovitz. What these donors consider worth funding has a sizable influence on what actually gets funded.

Today’s post uses historical comparisons to the Christianization of Roman philanthropy as well as Gilded Age philanthropy in the United States to begin to think critically about the discretion afforded to wealthy donors in shaping philanthropic priorities. In particular, I suggest, philanthropists exhibit important conservative biases that may explain some of effective altruism’s muted reaction towards institutional critiques of effective altruism. And more broadly, philanthropists tend to favor many of the same views and practices that brought them success in industries which differ importantly from the areas to which they turn their philanthropic focus. It is not obvious that this tendency to project methods from one domain onto another is a healthy feature of philanthropy.

There is much more to be said about the role of donor discretion in philanthropy. The rest I will save for the next post in this series.


Thorstad writes:

I think that the difficulty which philanthropists have in critiquing the systems that create and sustain them may explain much of the difficulty in conversations around what is often called the institutional critique of effective altruism.

The main difficulty I have with these "conversations" is that I haven't actually seen a substantive critique, containing anything recognizable as an argument. Critics don't say: "We should institute systemic policies X, Y, Z, and here's the supporting evidence why." Instead, they just seem to presuppose that a broadly anti-capitalist leftism is obviously correct, such that anyone who doesn't share their politics (for which, recall, we have been given no argument whatsoever) must be in need of psychologizing.

So consider that as an alternative hypothesis: the dialectic around the "institutional critique" is "difficult" (unproductive?) because it consists in critics psychologizing EAs rather than trying to persuade us with arguments.

Although effective altruists did engage in detail with the institutional critique, much of the response was decidedly unsympathetic. It is worth considering if the social and financial position of effective altruists might have something to do with this reaction – not because effective altruists are greedy (they are not), but because most of us find it hard to think ill of the institutions that raised us up.

This exemplifies the sort of engagement that I find unproductive. Rather than psychologizing those he disagrees with, I would much prefer to see Thorstad attempt to offer a persuasive first-order argument for some specific alternative cause prioritization (that diverges from the EA conventional wisdom). I think that would obviously be far more "worth considering" than convenient psychological stories that function to justify dismissing different perspectives than his own.

I think the latter is outright bad and detracts from reasoned discourse.

In fairness, you could consistently think "billionaires are biased against interventions that are justified via premises that make 'the system'/billionaires sound bad" without believing we should abolish capitalism. The critique could also be pointing to a real problem, and maybe one that could be mitigated in various ways, even if "abolish the system" is not a good idea. (Not a comment either way on whether your criticism of the versions of the institutional critique that have actually been made is correct.)

That's certainly possible! I just find it incredibly frustrating that these criticisms are always written in a way that fails to acknowledge that some of us might just genuinely disagree with the critics' preferred politics, and that we could have reasonable and principled grounds for doing so, which are worth engaging with.

As a methodological principle, I think one should argue the first-order issues before accusing one's interlocutors of bias. Fans of the institutional critique too often skip that crucial first step.

A kinder concept than bias would be conflict of interest. In the broader society, we normally don't expect a critic to prove actual biased decision-making to score a point; identifying a meaningful conflict of interest is enough. And it's not generally considered "psychologizing those [one] disagrees with" to point to a possible COI, even if the identification is mediated by assumptions about the person's internal mental functions.

Such a norm would make intellectual progress impossible. We'd just spend all day accusing each other of vague COIs. (E.g.: "Thorstad is a humanities professor, in a social environment that valorizes extreme Leftism and looks with suspicion upon anyone to the right of Bernie Sanders. In such a social environment, it would be very difficult for him to acknowledge the good that billionaire philanthropists do; he will face immense social pressure to instead reduce the status of billionaires and raise the status of left-wing activists, regardless of the objective merits of the respective groups. It's worth considering whether these social pressures may have something to do with the positions he ends up taking with regard to EA.")

There's a reason why philosophy usually has a norm of focusing on the first-order issues rather than these sorts of ad hominems.

I don't think academic philosophy is the right frame of reference here.

We can imagine a range of human pursuits that form a continuum of concern about COIs. On the one end, chess is a game of perfect information trivially obtained by chess critics. Even if COIs somehow existed in chess, thinking about them is really unlikely to add value because evaluating the player's moves will ~always be easier and more informative.[1] On the other hand, a politician may vote on the basis of classified information, very imperfect information, and considerations for which it is very difficult to display reasoning transparency. I care about COIs a lot there!

I'm not a professional (or even amateur) philosopher, but philosophical discourse strikes me as much closer to the chess side of the continuum. Being a billionaire philanthropist seems closer to the middle of the continuum. If we were grading EA/OP/GV by academic philosophy norms, I suspect we would fail some of their papers. As Thorstad has mentioned, there is little public discussion of key biorisk information on infohazard grounds (and he was unsuccessful in obtaining the information privately). We lack information -- such as a full investigation into various concerns that have been raised -- to fully evaluate whether GV has acted wisely in channeling tens of millions of dollars into CEA and other EVF projects. The recent withdrawal from certain animal-welfare subareas was not a paragon of reasoning transparency.

To be clear, it would be unfair to judge GV (or billionaire philanthropists more generally) by the standards of academic philosophy or chess. There's a good reason that the practice of philanthropy involves consideration of non-public (even sensitive) information and decisions that are difficult to convey with reasoning transparency. But I don't think it is appropriate to then apply those standards -- which are premised on the ready availability of information and very high reasoning transparency -- to the critics of billionaire philanthropists.

In the end, I don't find the basic argument for a significant COI against "anti-capitalist" interventions by a single random billionaire philanthropist (or by Dustin and Cari specifically) to be particularly convincing. But I do find the argument stronger on a class basis of billionaire philanthropists. I don't think that's because I am anti-capitalist -- I would also be skeptical of a system in which university professors controlled large swaths of the philanthropic funding base (they might be prone to dismissing the downsides of the university-industrial complex) or in which people who had made their money through crypto did (I expect they would be quite prone to dismissing the downsides of crypto).

~~~~

As for us non-billionaires, the effect of (true and untrue) beliefs about what funders will or won't fund on what gets proposed and what gets done seems obvious. There's on-Forum evidence that being too far away from GV's political views (i.e., being "right-coded") is seen as a liability. So that doesn't seem like psychologizing or a proposition that needs much support.

[1] I set aside the question of whether someone is throwing matches or otherwise colluding.

One quick reason for thinking that academic philosophy norms should apply to the "institutional critique" is that it appears in works of academic philosophy. If people like Crary et al are just acting as private political actors, I guess they can say whatever they want on whatever flimsy basis they want. But insofar as they're writing philosophy papers (and books published by academic presses) arguing for the institutional critique as a serious objection to Effective Altruism, I'm claiming that they haven't done a competent job of arguing for their thesis.

Instead, they just seem to presuppose that a broadly anti-capitalist leftism is obviously correct, such that anyone who doesn't share their politics (for which, recall, we have been given no argument whatsoever)  [ . . . .]

I don't think EAs are Thorstad's primary intended audience here. To the extent that most of that audience thinks what you characterize as "a broadly anti-capitalist leftism" is correct, or at least is aware of the arguments that are advanced in favor of that position, it isn't necessarily a good use of either his time or reader time to reinvent the wheel. This is roughly similar to how most posts here generally assume the core ideas associated with EAs and are not likely to move the needle with people who are either not informed of or are unpersuaded by the same. I'm guessing he would write differently if writing specifically to an EA audience.

More broadly, one could argue that the flipside of the aphorism that extraordinary claims require extraordinary evidence is that one only needs to put on (at most) a minimal case to refute an extraordinary claim unless and until serious evidence has been marshalled in its favor. It's plausible to think -- for instance -- that "it is right and proper for billionaires (and their agents) to have so much influence and discretion over philanthropy" or "it is right and proper for Dustin and Cari, and their agents, to have so much influence and discretion over EA" are indeed extraordinary claims, and I haven't seen what I would characterize as serious evidence in support of them. Relatedly, capitalism doesn't have a better claim to being the default starting point than does anti-capitalism.

I think you've misunderstood me. My complaint is not that these philosophers openly argue, "EAs are insufficiently Left, so be suspicious of them." (That's not what they say.) Rather, they presuppose Leftism's obviousness in a different way. They seem unaware that market liberals sincerely disagree with them about what's likely to have good results.

This leads them to engage in fallacious reasoning, like "EAs must be methodologically biased against systemic change, because why else would they not support anti-capitalist revolution?" I have literally never seen any proponent of the institutional critique acknowledge that some of us genuinely believe, for reasons, that anti-capitalist revolution is a bad idea. There is zero grappling with the possibility of disagreement about which "systemic changes" are good or bad. It's really bizarre. And I should stress that I'm not criticizing their politics here. I'm criticizing their reasoning. Their "evidence" of methodological bias is that we don't embrace their politics. That's terrible reasoning! 

I don't think I'm methodologically biased against systemic change, and nothing I've read in these critiques gives me any reason to reconsider that judgment. It's weird to present as an "objection" something that gives one's target no reason to reconsider their view. That's not how philosophy normally works!

Now, you could develop some sort of argument about which claims are or are not "extraordinary", and whether the historical success of capitalism relative to anti-capitalism really makes no difference to what we should treat as "the default starting point." Those could be interesting arguments (if you anticipated and addressed the obvious objections)! I'm skeptical that they'd succeed, but I'd appreciate the intellectual engagement, and the possibility of learning something from it. Existing proponents of the institutional critique have not done any of that work (from what I've read to date). And they're philosophers -- it's their job to make reasoned arguments that engage with the perspectives of those they disagree with.

I'm not sure any of these except maybe the second actually answers the complaints Richard is making.

The first linked post here seems to defend, or at least be sympathetic to, the position that encouraging veganism specifically among Black people in US cities is somehow more an attempt at "systemic change" with regard to animal exploitation than working towards lab-grown meat (the whole point of which is that it might end up replacing farming altogether). 
The third post is mostly not about the institutional critique at all, and the main thing it does say about it is just that longtermists can't respond to it by saying they only back interventions that pass rigorous GiveWell-style cost-benefit analysis. Which is true enough, but does zero to motivate the idea that there are good interventions aimed at institutional change available. Thorstad does also say "well, haven't anti-oppression mass movements done a whole lot of good in the past; isn't it a bit suspicious to think they've suddenly stopped doing so?". Which is a good point in itself, but fairly abstract, and it doesn't actually do much to help anyone identify what reforms they should be funding.

The fourth post is extraordinarily abstract: the point seems to be that a) we should pay more attention to injustice, and b) people often use abstract language about what is rational to justify injustice against oppressed groups. Again, this is not very actionable, and Thorstad's post does not really mention Crary's arguments for either of these claims. 

I think this goes some way toward vindicating Richard's complaint that not enough specific detail is given in these sorts of critiques, rather than undermining it (though only a little: these are short reviews, and may not do the work being reviewed justice).

I think this point is extremely revealing:

The first linked post here seems to defend, or at least be sympathetic to, the position that encouraging veganism specifically among Black people in US cities is somehow more an attempt at "systemic change" with regard to animal exploitation than working towards lab-grown meat (the whole point of which is that it might end up replacing farming altogether).

See also Crary et al.'s lament that EA funders prioritize transformative alt-meat research and corporate campaigns over sanctuaries for individual rescued animals. They are clearly not principled advocates for systemic change over piecemeal interventions. Rather, I take these examples to show that their criticisms are entirely opportunistic. (As I previously argued on my blog, the best available evidence -- especially taking into account their self-reported motivation for writing the anti-EA book -- suggests that these authors want funding for their friends and political allies, and don't want it to have to pass any kind of evaluation for cost-effectiveness relative to competing uses of the available funds. It's all quite transparent, and I don't understand why people insist on pretending that these hacks have intellectual merit.)

To be clear, Thorstad has written around a hundred different articles critiquing EA positions in depth, including significant amounts of object-level criticism.

I find it quite irritating that no matter how much in-depth, object-level criticism people like Thorstad or I make, if we dare to mention meta-level problems at all we often get treated like rabid social justice vigilantes. This is just mud-slinging: both meta-level and object-level issues are important for the epistemological health of the movement.

How does writing a substantive post on x-risk give Thorstad a free pass to cast aspersions when he turns to discussing politics or economics?

I'm criticizing specific content here. I don't know who you are or what your grievances are, and I'd ask you not to project them onto my specific criticisms of Thorstad and Crary et al.

Thorstad acknowledged that many of us have engaged in depth with the critique he references, but instead of treating our responses as worth considering, he suggests it is "worth considering if the social and financial position of effective altruists might have something to do with" the conclusions we reach.

It is hardly "mud-slinging" for me to find this slimy dismissal objectionable. Nor is it mud-slinging to point out ways in which Crary et al (cited approvingly by Thorstad) are clearly being unprincipled in their appeals to "systemic change". This is specific, textually-grounded criticism of specific actors, none of whom are you.

I think Thorstad has written very good stuff -- for example, on arguments for small reductions in extinction risk. More politically, his reporting on the racism of Scott Alexander and some other figures connected to the community is a useful public service, and he has every right to be pissed off at people ignoring or disparaging the racism stuff. I don't even necessarily entirely disagree with the meta-level critique being offered here.

But it was still striking to me that someone responded to the complaint that people making the institutional critique tend not to offer much in the way of actionable information, and tend to take a "let me explain why these people came to their obviously wrong views" tone, by posting a bunch of stuff that was mostly like that.

If my tone is sharp, it's also because, like Richard, I find the easy, unthinking combination of "the problem with these people is that they don't care about changing the system" with "why are they doing meat alternatives and not vegan outreach aimed at a particular ethnic group that makes up <20% of the population, or animal shelters?" to be genuinely enragingly hypocritical and unserious. That's actually somewhat separate from whether EAs are insufficiently sympathetic to anticapitalist or "social justice"-coded ideas.

Incidentally, while I agree with Jason that "Moskovitz and Tuna ought to be able to personally decide where nearly all the money in the movement is spent" is the weird claim that needs defending, my guess is that at least one practical effect of this has been to pull the movement left, not right, on several issues. Open Phil spent money on anti-mass-incarceration work and on vaguely left-coded macroeconomic policy work at a time when the community was not particularly interested in either. Indeed, I remember Thorstad singling out critiques of the criminal justice work as examples of the community holding left-coded projects to a higher standard of proof. More recently, you must have seen the rationalist complaints on the Forum about how Open Phil won't fund anything "right-coded". None of that is to say there are no problems in principle with unaccountable billionaires, of course. After all, our other major billionaire donor was SBF! (Though his politics wasn't really the issue.)

Those are meta-level epistemological/methodological critiques for the most part, but meta-level epistemological/methodological critiques can still be substantive critiques and not reducible to mere psychologization of adversaries.

Yeah, I suppose that is fair.
