In theory, effective altruists are committed to using reason and evidence to identify the best interventions. In practice, much of the available funding is controlled by a small number of actors, including prominent donors – most recently Sam Bankman-Fried, and now Cari Tuna and Dustin Moskovitz. What these donors consider worth funding has a sizable influence on what actually gets funded.

Today’s post uses historical comparisons to the Christianization of Roman philanthropy as well as Gilded Age philanthropy in the United States to begin to think critically about the discretion afforded to wealthy donors in shaping philanthropic priorities. In particular, I suggest, philanthropists exhibit important conservative biases that may explain some of effective altruism’s muted reaction towards institutional critiques of effective altruism. And more broadly, philanthropists tend to favor many of the same views and practices that brought them success in industries which differ importantly from the areas to which they turn their philanthropic focus. It is not obvious that this tendency to project methods from one domain onto another is a healthy feature of philanthropy.

There is much more to be said about the role of donor discretion in philanthropy. The rest I will save for the next post in this series.

Comments (17)

Thorstad writes:

I think that the difficulty which philanthropists have in critiquing the systems that create and sustain them may explain much of the difficulty in conversations around what is often called the institutional critique of effective altruism.

The main difficulty I have with these "conversations" is that I haven't actually seen a substantive critique, containing anything recognizable as an argument. Critics don't say: "We should institute systemic policies X, Y, Z, and here's the supporting evidence why." Instead, they just seem to presuppose that a broadly anti-capitalist leftism is obviously correct, such that anyone who doesn't share their politics (for which, recall, we have been given no argument whatsoever) must be in need of psychologizing.

So consider that as an alternative hypothesis: the dialectic around the "institutional critique" is "difficult" (unproductive?) because it consists in critics psychologizing EAs rather than trying to persuade us with arguments.

Although effective altruists did engage in detail with the institutional critique, much of the response was decidedly unsympathetic. It is worth considering if the social and financial position of effective altruists might have something to do with this reaction – not because effective altruists are greedy (they are not), but because most of us find it hard to think ill of the institutions that raised us up.

This exemplifies the sort of engagement that I find unproductive. Rather than psychologizing those he disagrees with, I would much prefer to see Thorstad attempt to offer a persuasive first-order argument for some specific alternative cause prioritization (that diverges from the EA conventional wisdom). I think that would obviously be far more "worth considering" than convenient psychological stories that function to justify dismissing different perspectives than his own.

I think the latter is outright bad and detracts from reasoned discourse.

In fairness, you could consistently think "billionaires are biased against interventions which are justified via premises that make 'the system'/billionaires sound bad" without believing we should abolish capitalism. The critique could also be pointing to a real problem, and maybe one that could be mitigated in various ways, even if "abolish the system" is not a good idea. (Not a comment either way on whether your criticism of the versions of the institutional critique that have actually been made is correct.)

That's certainly possible! I just find it incredibly frustrating that these criticisms are always written in a way that fails to acknowledge that some of us might just genuinely disagree with the critics' preferred politics, and that we could have reasonable and principled grounds for doing so, which are worth engaging with.

As a methodological principle, I think one should argue the first-order issues before accusing one's interlocutors of bias. Fans of the institutional critique too often skip that crucial first step.

A kinder concept than bias would be conflict of interest. In the broader society, we normally don't expect a critic to prove actual biased decision-making to score a point; identifying a meaningful conflict of interest is enough. And it's not generally considered "psychologizing those [one] disagrees with" to point to a possible COI, even if the identification is mediated by assumptions about the person's internal mental functions.

Such a norm would make intellectual progress impossible. We'd just spend all day accusing each other of vague COIs. (E.g.: "Thorstad is a humanities professor, in a social environment that valorizes extreme Leftism and looks with suspicion upon anyone to the right of Bernie Sanders. In such a social environment, it would be very difficult for him to acknowledge the good that billionaire philanthropists do; he will face immense social pressure to instead reduce the status of billionaires and raise the status of left-wing activists, regardless of the objective merits of the respective groups. It's worth considering whether these social pressures may have something to do with the positions he ends up taking with regard to EA.")

There's a reason why philosophy usually has a norm of focusing on the first-order issues rather than these sorts of ad hominems.

I don't think academic philosophy is the right frame of reference here.

We can imagine a range of human pursuits that form a continuum of concern about COIs. At one end, chess is a game of perfect information that is trivially available to chess critics. Even if COIs somehow existed in chess, thinking about them is really unlikely to add value, because evaluating the player's moves will ~always be easier and more informative.[1] At the other end, a politician may vote on the basis of classified information, very imperfect information, and considerations for which it is very difficult to display reasoning transparency. I care about COIs a lot there!

I'm not a professional (or even amateur) philosopher, but philosophical discourse strikes me as much closer to the chess side of the continuum. Being a billionaire philanthropist seems closer to the middle. If we were grading EA/OP/GV by academic-philosophy norms, I suspect we would fail some of their papers. As Thorstad has mentioned, there is little public discussion of key biorisk information on infohazard grounds (and he was unsuccessful in obtaining the information privately, too). We lack the information -- such as a full investigation into various concerns that have been raised -- to fully evaluate whether GV has acted wisely in channeling tens of millions of dollars into CEA and other EVF projects. The recent withdrawal from certain animal-welfare subareas was not a paragon of reasoning transparency.

To be clear, it would be unfair to judge GV (or billionaire philanthropists more generally) by the standards of academic philosophy or chess. There's a good reason that the practice of philanthropy involves consideration of non-public (even sensitive) information and decisions that are difficult to convey with reasoning transparency. But I don't think it is appropriate to then apply those standards -- which are premised on the ready availability of information and very high reasoning transparency -- to the critics of billionaire philanthropists.

In the end, I don't find the basic argument for a significant COI against "anti-capitalist" interventions particularly convincing when applied to a single random billionaire philanthropist (or to Dustin and Cari specifically). But I do find the argument stronger when applied to billionaire philanthropists as a class. I don't think that's because I am anti-capitalist -- I would be equally skeptical of a system in which university professors controlled large swaths of the philanthropic funding base (they might be prone to dismissing the downsides of the university-industrial complex), or one in which people who had made their money through crypto did (I expect they would be quite prone to dismissing the downsides of crypto).

~~~~

As for us non-billionaires, the effect of (true and untrue) beliefs about what funders will or won't fund on what gets proposed and what gets done seems obvious. There's on-Forum evidence that being too far from GV's political views (i.e., being "right-coded") is seen as a liability. So that doesn't seem like psychologizing, or a proposition that needs much support.

  1. ^

    I set aside the question of whether someone is throwing matches or otherwise colluding.

One quick reason for thinking that academic philosophy norms should apply to the "institutional critique" is that it appears in works of academic philosophy. If people like Crary et al are just acting as private political actors, I guess they can say whatever they want on whatever flimsy basis they want. But insofar as they're writing philosophy papers (and books published by academic presses) arguing for the institutional critique as a serious objection to Effective Altruism, I'm claiming that they haven't done a competent job of arguing for their thesis.

Instead, they just seem to presuppose that a broadly anti-capitalist leftism is obviously correct, such that anyone who doesn't share their politics (for which, recall, we have been given no argument whatsoever)  [ . . . .]

 

I don't think EAs are Thorstad's primary intended audience here. To the extent that most of that audience thinks what you characterize as "a broadly anti-capitalist leftism" is correct, or at least is aware of the arguments advanced in favor of that position, it isn't necessarily a good use of either his time or his readers' time to reinvent the wheel. This is roughly similar to how most posts here generally assume the core ideas associated with EA and are not likely to move the needle with people who are either uninformed of or unpersuaded by them. I'm guessing he would write differently if writing specifically for an EA audience.

More broadly, one could argue that the flipside of the aphorism that extraordinary claims require extraordinary evidence is that one only needs to put on (at most) a minimal case to refute an extraordinary claim unless and until serious evidence has been marshalled in its favor. It's plausible to think -- for instance -- that "it is right and proper for billionaires (and their agents) to have so much influence and discretion over philanthropy" or "it is right and proper for Dustin and Cari, and their agents, to have so much influence and discretion over EA" are indeed extraordinary claims, and I haven't seen what I would characterize as serious evidence in support of them. Relatedly, capitalism doesn't have a better claim to being the default starting point than does anti-capitalism.

I think you've misunderstood me. My complaint is not that these philosophers openly argue, "EAs are insufficiently Left, so be suspicious of them." (That's not what they say.) Rather, they presuppose Leftism's obviousness in a different way. They seem unaware that market liberals sincerely disagree with them about what's likely to have good results.

This leads them to engage in fallacious reasoning, like "EAs must be methodologically biased against systemic change, because why else would they not support anti-capitalist revolution?" I have literally never seen any proponent of the institutional critique acknowledge that some of us genuinely believe, for reasons, that anti-capitalist revolution is a bad idea. There is zero grappling with the possibility of disagreement about which "systemic changes" are good or bad. It's really bizarre. And I should stress that I'm not criticizing their politics here. I'm criticizing their reasoning. Their "evidence" of methodological bias is that we don't embrace their politics. That's terrible reasoning! 

I don't think I'm methodologically biased against systemic change, and nothing I've read in these critiques gives me any reason to reconsider that judgment. It's weird to present as an "objection" something that gives one's target no reason to reconsider their view. That's not how philosophy normally works!

Now, you could develop some sort of argument about which claims are or are not "extraordinary", and whether the historical success of capitalism relative to anti-capitalism really makes no difference to what we should treat as "the default starting point." Those could be interesting arguments (if you anticipated and addressed the obvious objections)! I'm skeptical that they'd succeed, but I'd appreciate the intellectual engagement, and the possibility of learning something from it. Existing proponents of the institutional critique have not done any of that work (from what I've read to date). And they're philosophers -- it's their job to make reasoned arguments that engage with the perspectives of those they disagree with.

I'm not sure any of these except maybe the second actually answer the complaints Richard is making. 

The first linked post here seems to defend, or at least be sympathetic to, the position that encouraging veganism specifically among Black people in US cities is somehow more an attempt at "systemic change" with regard to animal exploitation than working towards lab-grown meat (the whole point of which is that it might end up replacing farming altogether). 
The third post is mostly not about the institutional critique at all, and the main thing it does say about it is just that longtermists can't respond to it by saying they only back interventions that pass rigorous GiveWell-style cost-benefit analysis. Which is true enough, but does zero to motivate the idea that there are good interventions aimed at institutional change available. Thorstad does also say "well, haven't anti-oppression mass movements done a whole lot of good in the past; isn't it a bit suspicious to think they've suddenly stopped doing so?". Which is a good point in itself, but fairly abstract, and doesn't actually do much to help anyone identify what reforms they should be funding.

The fourth post is extraordinarily abstract: the point seems to be that a) we should pay more attention to injustice, and b) people often use abstract language about what is rational to justify injustice against oppressed groups. Again, this is not very actionable, and Thorstad's post does not really mention Crary's arguments for either of these claims. 

I think this goes some way towards vindicating Richard's complaint that not enough specific detail is given in these sorts of critiques, rather than undermining it (though only a little: these are short reviews, and may not do the stuff being reviewed justice).



 

I think this point is extremely revealing:

The first linked post here seems to defend, or at least be sympathetic to, the position that encouraging veganism specifically among Black people in US cities is somehow more an attempt at "systemic change" with regard to animal exploitation than working towards lab-grown meat (the whole point of which is that it might end up replacing farming altogether).

See also Crary et al.'s lament that EA funders prioritize transformative alt-meat research and corporate campaigns over sanctuaries for individual rescued animals. They are clearly not principled advocates for systemic change over piecemeal interventions. Rather, I take these examples to show that their criticisms are entirely opportunistic. (As I previously argued on my blog, the best available evidence -- especially taking into account their self-reported motivation for writing the anti-EA book -- suggests that these authors want funding for their friends and political allies, and don't want it to have to pass any kind of evaluation for cost-effectiveness relative to competing uses of the available funds. It's all quite transparent, and I don't understand why people insist on pretending that these hacks have intellectual merit.)

To be clear, Thorstad has written around a hundred articles critiquing EA positions in depth, including significant amounts of object-level criticism.

I find it quite irritating that no matter how much in-depth, object-level criticism people like Thorstad or I make, if we dare to mention meta-level problems at all we often get treated like rabid social justice vigilantes. This is just mud-slinging: both meta-level and object-level issues are important for the epistemological health of the movement.

How does writing a substantive post on x-risk give Thorstad a free pass to cast aspersions when he turns to discussing politics or economics?

I'm criticizing specific content here. I don't know who you are or what your grievances are, and I'd ask you not to project them onto my specific criticisms of Thorstad and Crary et al.

Thorstad acknowledged that many of us have engaged in depth with the critique he references, but instead of treating our responses as worth considering, he suggests it is "worth considering if the social and financial position of effective altruists might have something to do with" the conclusions we reach.

It is hardly "mud-slinging" for me to find this slimy dismissal objectionable. Nor is it mud-slinging to point out ways in which Crary et al (cited approvingly by Thorstad) are clearly being unprincipled in their appeals to "systemic change". This is specific, textually-grounded criticism of specific actors, none of whom are you.

I think Thorstad has written very good stuff – for example on arguments for small reductions in extinction risk. More politically, his reporting on Scott Alexander and some other figures connected to the community's racism is a useful public service, and he has every right to be pissed off [EDIT: the sentence originally ended here; I meant to say he has every right to be pissed off at people ignoring or disparaging the racism stuff]. I don't even necessarily entirely disagree with the meta-level critique being offered here.

But it was still striking to me that someone responded to the complaint that people making the institutional critique tend not to offer much in the way of actionable information, and tend to take a "let me explain why these people came to their obviously wrong views" tone, by posting a bunch of stuff that was mostly like that.

If my tone is sharp, it's also because, like Richard, I find the easy, unthinking combination of "the problem with these people is that they don't care about changing the system" with "why are they doing meat alternatives and not vegan outreach aimed at a particular ethnic group that makes up <20% of the population, or animal shelters" to be genuinely enragingly hypocritical and unserious. That's actually somewhat separate from whether EAs are insufficiently sympathetic to anti-capitalist or "social justice"-coded ideas.

Incidentally, while I agree with Jason that "Moskovitz and Tuna ought to be able to personally decide where nearly all the money in the movement is spent" is the weird claim that needs defending, my guess is that at least one practical effect of this has been to pull the movement left, not right, on several issues. Open Phil spent money on anti-mass-incarceration work and vaguely left-coded macroeconomic policy at a time when the community was not particularly interested in either. Indeed, I remember Thorstad singling out critiques of the criminal justice work as an example of the community holding left-coded causes to a higher standard of proof. More recently, you must have seen the rationalist complaints on the forum about how Open Phil won't fund anything "right-coded". None of that is to say there are no problems in principle with unaccountable billionaires, of course. After all, our other major billionaire donor was SBF! (Though his politics wasn't really the issue.)

Those are meta-level epistemological/methodological critiques for the most part, but meta-level epistemological/methodological critiques can still be substantive critiques and not reducible to mere psychologization of adversaries.

Yeah, I suppose that is fair.
