982 · Joined Jan 2023




Thanks for sharing—this seems like a good strategy. I'm curious what people said when you asked whether they had heard of EA; like, what percent had, and of those, what percent had a positive/neutral/negative impression?

I agree with what others have said re: pedestal, so am not going to produce more quotes or anecdotes. I stand by the claim, though. 

I think people may have been inclined to put SBF on a pedestal because earning to give was the main thing people criticized about early EA. People were otherwise pretty supportive of early EA ideas; I mean, it's hard not to support finding more cost-effective global health charities. When SBF emerged, I think this was a bit of a "see, we told you so" moment for EAs who had been around for a long time, especially because SBF had explicitly chosen to earn to give because of EA. So it wasn't just: "look this guy is earning to give and has billions of dollars!" The subtext was also: "EA is really onto something with its thinking and advice." He became a poster boy for the idea that we can actually intellectualize our way to making the world better (so fuck the haters).

I think a more plausible defense of senior EAs is not that this pedestal thing didn't happen, but that (as @Stefan_Schubert suggests) it may not have made that much of a difference. EAs might well have rallied around SBF even if senior people hadn't promoted him. And this is definitely possible, but I wonder if things would've played out pretty differently if senior EAs had been like "look, we'll take your money, and we'll recommend some great people to work for you, but we don't want to personally serve on the board of FTX Foundation/vouch for you/have you on our podcast, etc because we have heard rumors about X, Y, and Z, and think they pose reputational risks to EA."

Lastly: it looks like the three former Alameda employees accused SBF of having "inappropriate sexual relationships with subordinates" around the beginning of the #MeToo movement. Alameda launched in the fall of 2017 and the confrontation with Sam occurred in April of 2018. The NYT published its article about Harvey Weinstein on October 5th, 2017, and dozens of men were accused of harassment between then and February 2018. The fact that SBF's alleged inappropriate sexual behavior occurred around the height of the #MeToo movement doesn't make me think EA leaders had less of a reason to worry about the reputational risks of promoting him.

I am also eager to see what the investigation concludes, but I'm pretty convinced at this point that EA leaders made big mistakes.

It's not obvious to me (yet) that they should've known not to take Sam's money—non-profits accept donations from dubious characters all the time. Even if EA leaders thought Sam was sketchy (which it appears some did), it's not clear to me they should've known Sam was don't-take-money-from-this-person bad. This is a line non-profits walk all the time, and many have erred on the side of taking money from people they shouldn't have taken money from.

But I cannot wrap my head around why—knowing what it appears they knew then—anyone thought it was a good idea to put this guy on a pedestal; to elevate him as a moral paragon and someone to emulate; to tie EA's reputation so closely to his. It really feels like they should've (at least) known not to do that.

Answer by lilly · Mar 12, 2023

I don't know a lot about this bill specifically, but here's my sense:

This bill has been pushed by disability activists, who are opposed to things like QALYs, which they consider ableist. Steve Pearson nicely summarizes why here:

Since the early days of CEA, experts recognized that any extension of life for patients with a persistent disability would be "weighted" in the QALY by the (lower) quality of life assigned to that health state. For example, a treatment that extends life — but does not improve quality of life — for patients with a condition that requires mechanical ventilation would be assigned a lower QALY gain than a treatment that extends life exactly the same amount for patients with rheumatoid arthritis or cancer.

This bill currently has no Democratic co-sponsors in the House (although this article says that there is "bipartisan interest"), and I do not think it has been introduced in the Senate. Thus, I suspect this bill is unlikely to get passed under a Democratic administration, but I'm not sure about that.

Here is some background:

  1. US health care spending is out of control ($4.3 trillion; 18.3% of GDP in 2021); this is a massive, intractable problem, and this bill would certainly not help. 
  2. I do not think QALYs are super widely used in the US health care system as is. H.R. 485 represents an expansion of existing restrictions (see page 47) on the use of cost-effectiveness analysis in Medicare and other federal programs; the goal of this legislation is to fully ban the use of QALYs across all federal programs, which I think would include state Medicaid programs, since Medicaid is jointly financed by states and the federal government.
  3. Even in the absence of this bill, there is significant public opposition to the use of QALYs and similar metrics in US health care. (Health care rationing remains a very loaded issue in US politics.)
  4. In terms of things that are wrong with the US health care system, failure to use QALYs is a problem, but I think other things are bigger contributors to the widespread provision of low-value care.
  5. If this bill were to pass, I think (?) it'd still be possible to use things like evLYGs, which could play a similar role as QALYs in cost-effectiveness analysis, but "evenly measure any gains in length of life, regardless of the treatment’s ability to improve patients’ quality of life."
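To make the QALY-vs-evLYG distinction in point 5 concrete, here is a minimal numerical sketch. All numbers (the two-year survival gain, the quality-of-life weights) are made up for illustration; they are not drawn from the bill, from ICER, or from any real assessment:

```python
def qaly_gain(life_years_gained: float, quality_weight: float) -> float:
    """QALYs weight each added life-year by quality of life (0 = dead, 1 = full health)."""
    return life_years_gained * quality_weight

def evlyg_gain(life_years_gained: float) -> float:
    """evLYGs count every added life-year equally, regardless of quality of life."""
    return life_years_gained

# Two hypothetical treatments, each extending life by 2 years without
# improving quality of life:
ventilated = qaly_gain(2.0, 0.4)  # patient on mechanical ventilation -> 0.8 QALYs
arthritis = qaly_gain(2.0, 0.8)   # patient with rheumatoid arthritis -> 1.6 QALYs

# Under QALYs, the identical survival gain is valued differently across the
# two patient groups (this is the "weighting" disability activists object to);
# under evLYGs, both treatments score the same 2.0 life-years.
print(ventilated, arthritis, evlyg_gain(2.0))
```

This is why evLYGs could plausibly substitute for QALYs in cost-effectiveness analysis under a QALY ban: the life-extension component survives, while the quality-of-life weighting on added years is dropped.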

Tl;dr: I think passing this bill might be akin to shooting holes in the tires of a car that only had two wheels to begin with, and it currently looks unlikely to pass.

Thanks for bravely sharing this. I'm really sorry to hear what you've been through. 

This passage resonated with me:

Last week at EAG, I received a Swapcard message that proposed a non-platonic interaction under the guise of professional interaction. I went to an afterparty where someone I had just met—literally introduced to me moments before—put their hand on the small of my back and grabbed and held onto my arm multiple times. These might seem like minor annoyances, but I have heard and experienced that these kinds of small moments happen often to women in EA. These kinds of experiences undermine my own feelings of comfort and value in the community. 

This kind of stuff has happened to me too. Each incident has felt too minor to do anything about, primarily because if I did, I'd then have to think about it as a Bad Thing That Happened, rather than an awkward interaction; something to forget. So there is a death-by-a-thousand-cuts element to all of this, where no individual interaction has risen to the level of wanting to make a fuss, but in combination, these things change how I perceive myself in EA spaces (more male gaze-y) and how I act (a bit more guarded). 

And then I felt sad reading what followed:

This might be anecdata, as some people say, and I know obtaining robust data on these issues has its own challenges

Because it felt as if you were writing for a reader who might be inclined to doubt you. (Sorry if I am projecting; it's just that some people have raised this sort of objection.) So I just want to say: I think that in the context of an issue that is notoriously hard to study (as you note), our experiences—and the experiences of the many other women who have shared their stories—do provide strong evidence of an important, systemic issue. Thanks again for speaking up.

Thanks so much for doing this analysis and writing this up! I'm curious whether there is a principled reason for using POC as a category, rather than focusing on specific ethnic groups that are underrepresented in EA, especially given what footnote 4 says about the breakdown of EAG attendees who are POC (24% Asian, 5% Hispanic, 2% Black, 1% multiracial).  Some people have been critical of the term "POC" because they think it can gloss over this kind of information.

I think the crux of the disagreement is this: you can't disentangle the practical sociological questions from the normative questions this easily. E.g., the practical solution to "how do we feed everyone" is "torture lots of animals" because our society cares too much about having cheap, tasty food and too little about animals' suffering. The practical solution to "what do we do about crime" is "throw people in prison for absolutely trivial stuff" because our society cares too much about retribution and too little about the suffering of disadvantaged populations. And so on. Practical sociological solutions are always accompanied by normative baggage, and much of this normative baggage is bad. 

EA wouldn't be effective if it just made normative critiques ("the world is extremely unjust") but didn't generate its own practical solutions ("donate to GiveWell"). EA has more impact than most philosophy departments because it criticizes many conventional philosophical positions while also generating its own practical sociological solutions. This doesn't mean all of those solutions are right—I agree that many aren't—but EA wouldn't be EA if it didn't challenge conventional sociological wisdom.

(Separately, I'd contest that this is not a topic of interest to sociologists. Most sociology PhD curricula devote substantial time to social theory, and a large portion of sociologists are critical theorists; i.e., they believe that "social problems stem more from social structures and cultural assumptions than from individuals... [social theory] argues that ideology is the principal obstacle to human liberation.")

The consensus of most people is that conventional wisdom is pretty good when it comes to designing institutions; at least compared to what a first-principles-reasoner could come up with.

I think your characterization of conventional answers to practical sociological questions is much too charitable, and your conclusion ("there is a rot in the EA community that is so consequential that it inclines me to discourage effective altruists from putting much, if any, trust in EA community members, EA "leaders", the EA Forum, or LessWrong") is correspondingly much too strong.

Indeed, EA grew out of a recognition that many conventional answers to practical sociological questions are bad. Many of us were socialized to think our major life goals should include getting a lucrative job, buying a nice house in the suburbs, and leaving as much money as possible to our kids. Peter Singer very reasonably contested this by pointing out that the conventional wisdom is probably wrong here: the world is deeply unjust, most people and animals live very hard lives, and many of us can improve their lives while making minimal sacrifices ourselves. (And this is to say nothing about the really bad answers societies have developed for certain practical sociological questions: e.g., slavery; patrilineal inheritance; mass incarceration; factory farming.)

More generally, our social institutions are not designed to make sentient creatures' lives go well; they are designed to, for instance, maximize profits for corporations. Amazon is not usually cited as an example of an actor using evidence and reason to do as much good as possible, and company solutions are not developed with this aim in mind. (Think of another practical solution Amazon workers came up with: peeing in bottles because they risked missing their performance targets if they used the bathroom.)

I agree with many of the critiques you make of EA here, and I agree that EA could be improved by adopting conventional wisdom on some of the issues you cite. But I would suggest that your examples are cherrypicked, and that "the rot" you refer to is at least as prevalent in the broader world as it is within EA. Just because EA errs on certain practical sociological questions (e.g., peer review; undervaluing experience) does not mean that conventional answers are systematically better. 

  • 40% of our charities reach or exceed the cost-effectiveness of the strongest charities in their fields (e.g., GiveWell/ACE recommended).
  • 40% are in a steady state. This means they are having impact, but not at the GiveWell-recommendation level yet, or their cost-effectiveness is currently less clear-cut (all new charities start in this category for their first year).
  • 20% have already shut down or might in the future. 

Thanks for writing this up. I am excited about the work you are doing, but to be blunt, these success rates strike me as implausibly good. Here are a few reasons why I am skeptical:

  1. As you note, charities generally get more cost-effective as they scale, and most CE charities are still quite small. 
  2. These charities are young, and there is a long learning curve associated with building an effective organization.
  3. Doing good charitable work is hard—many charities are ineffective, and a substantial portion cause harm. My prior is therefore that most incubated charities would not wind up being cost-effective, even if CE did a perfect job. Given this, I suspect that some of the 40% of cost-effective charities are not actually cost-effective, and that many of the "steady state" charities should be reclassified as "might shut down in the future" charities—although these three categories ("highly cost-effective," "having impact," and "might shut down") are vague and do not cover the range of possible outcomes here.
  4. The fact that this post pitches the program to potential applicants ("Applications are now open") also makes me somewhat more skeptical about the positive gloss.

I have read your response here, and agree that it'd be good to have an external organization do a comprehensive evaluation. 
