It is a disaster for EA. We need the EAs on the board to explain themselves, and if they made a mistake, just admit that they made a mistake and step down.
"Effective altruism" depends on being effective. If EA is just putting people in charge of other people's money, and those people make decisions that seem bad, never explain why, and refuse to change their minds no matter what happens... that's no better than existing charities! This is what EA was supposed to prevent! We are supposed to be effective. Not to fire the best employees and destroy a company that is putting an incredible amount of effort into doing responsible things.
I might as well give my money to the San Francisco Symphony. At least they won't spend it ruining things that I care about.
Please, anyone who knows Helen or Tasha, ask them to reconsider.
The strategy of "get a lot of press about our cause area, to raise awareness, even if the press gets the details wrong" seems to be the opposite of what EA is all about. Shouldn't we be using evidence and reason to figure out how to benefit others as much as possible?
When the logic is "I feel very strongly about cause area X, therefore we should do as much about X as possible; anything that helps X is good, any people excited about X are good, any way of spending money on X is good" - well, then X could equally well be cancer research, or saving the whales, or donating to the Harvard endowment, or the San Francisco Symphony.
Some people think that, with a super-powerful AI running the world, there would be no need for traditional government. The AI can simply make all the important decisions to optimize human welfare.
This is similar to the Marxist idea of the "withering away of the state". Once perfect Communism has been achieved, there will be no more need for government.
In practice, the state didn't really wither away under Stalin. It was more like Stalin gained personal control over a new organization, the Communist Party, and used it to reinforce his own dictatorship and bend the nation to his will.
If we have transformational superhuman AI, the risk of war seems quite high. But an AI powerful enough to turn the whole world into paper clips could win a war immediately, without bloodshed. Or with lots of bloodshed, if that's what it wanted.
One possible outcome of superhuman AI is a global dictatorship. Whoever controls the superhuman AI controls the world, right? The CEO of the AI company that wins the race aligns the AI to themselves and makes themselves into an immortal god-king. At first they are benevolent. Over time it becomes impossible for the god-king to retain their humanity, as they become less and less like any normal human. The sun sets on the humanist era.
But this is turning into a science fiction story. In practice a "superhuman AI" probably won't be all-powerful like this, there will be many details of what it can and can't do that I can't predict. Or maybe the state will just wither away!
The Against Malaria Foundation doesn't give a high proportion of its money to evil dictatorships, but it does give some. Same goes for Deworm the World.
I was wondering about this, because I was reading a book about the DRC - Dancing in the Glory of Monsters - which was broadly opposed to NGO activity in the country as propping up the regime. And I was trying to figure out how to square this criticism with the messages from the NGOs themselves. I am not really sure, though, because the pro-NGO side of the debate (like EA) and the anti-NGO side of the debate (like that book) seem to mostly be ignoring each other.
I think there should be some kind of small negative adjustment (even if token) from GiveWell on this front.
Yeah, I don't even know if it's the sort of thing that you can adjust for. It's kind of unmeasurable, right? Or maybe you could measure something like the net QALY difference between a particular country being a dictatorship and being a democracy, and make an argument that supporting a dictator is less bad than the particular public health intervention is good.
I would at least like to see people from the EA NGO world engage with this line of criticism, from people who are concerned that "the NGO system in poor countries, overall, is doing more unmeasurable harm than measurable good".
I think the Wytham Abbey situation is a success for transparency. Due to transparency, many people became aware of the purchase and were able to give public feedback that it seemed like a big waste of money and was embarrassing to the EA cause. Now, hopefully, future EA decisionmakers will be less likely to waste money in this way.
It's too much to expect EA decisionmakers to never make any mistakes ever. The point of transparency is to force decisionmakers to learn from their mistakes, not to avoid ever making any mistakes.
I'm glad this contest happened, but I was hoping to see some deeper reflection. To me it seems like the most concerning criticisms of the GiveWell approach are criticisms along more fundamental lines. Such as -
This might be too much to expect from any self-reflection exercise, though.
I don't know how the criminal law works. But if it turns out that the money in the FTX Future Fund was obtained fraudulently, would it be ethical to keep spending it, rather than giving it back to the victims of the fraud?
Banning slaughterhouses is essentially a ban on eating meat, right? I can't imagine that 43% of the US public would support that, when no more than 10% of the US public is vegetarian in the first place. (Estimates vary; you say 1% in this article, and 10% is the most aggressive estimate I could find.)
It seems much more likely that these surveys are invalid for some reason. Perhaps the word "slaughterhouses" confused people, or perhaps people are just answering surveys based on emotion without bothering to think through what banning slaughterhouses actually means.