The Against Malaria Foundation doesn't give a high proportion of its money to evil dictatorships, but it does give some. The same goes for Deworm the World.
I was wondering about this, because I was reading a book about the DRC - Dancing in the Glory of Monsters - which broadly opposed NGO activity in the country, arguing that it props up the regime. And I was trying to figure out how to square this criticism with the messages from the NGOs themselves. I am not really sure how, though, because the pro-NGO side of the debate (like EA) and the anti-NGO side of the debate (like that book) seem to mostly be ignoring each other.
I think there should be some kind of small negative adjustment (even if token) from GiveWell on this front.
Yeah, I don't even know if it's the sort of thing that you can adjust for. It's kind of unmeasurable, right? Or maybe you could estimate something like the net QALY cost of a particular country being a dictatorship instead of a democracy, and then argue that supporting the dictator is less bad than the particular public health intervention is good.
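One toy way to frame that comparison, as a back-of-the-envelope sketch. Every number below is a made-up placeholder for illustration only, not an estimate from GiveWell or any other real source:

```python
# Toy comparison of measurable good vs. unmeasurable harm.
# All numbers are hypothetical placeholders, purely illustrative.

# Hypothetical QALYs gained per year by a health intervention
# (e.g. bednet distribution) in the country.
qalys_gained_by_intervention = 10_000

# Hypothetical QALYs lost per year because NGO activity marginally
# props up the regime (the hard-to-measure harm being discussed).
qalys_lost_to_regime_support = 2_000

net_qalys = qalys_gained_by_intervention - qalys_lost_to_regime_support
print(net_qalys)  # positive means the intervention nets out good under these assumptions
```

The hard part, of course, is that the second number is exactly the quantity nobody knows how to measure; the sketch just shows where it would enter the calculation.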
I would at least like to see people from the EA NGO world engage with this line of criticism, from people who are concerned that "the NGO system in poor countries, overall, is doing more unmeasurable harm than measurable good".
I think the Wytham Abbey situation is a success for transparency. Due to transparency, many people became aware of the purchase and were able to give public feedback that it seemed like a big waste of money and that it's embarrassing to the EA cause. Now, hopefully, in the future EA decisionmakers will be less likely to waste money in this way.
It's too much to expect EA decisionmakers to never make any mistakes ever. The point of transparency is to force decisionmakers to learn from their mistakes, not to avoid ever making any mistakes.
I'm glad this contest happened, but I was hoping to see some deeper reflection. To me it seems like the most concerning criticisms of the GiveWell approach are criticisms along more fundamental lines. Such as -
This might be too much to expect from any self-reflection exercise, though.
I don't know how the criminal law works. But if it turns out that the money in the FTX Future Fund was obtained fraudulently, would it be ethical to keep spending it, rather than giving it back to the victims of the fraud?
Banning slaughterhouses is essentially a ban on eating meat, right? I can't imagine that 43% of the US public would support that, when no more than 10% of the US public is vegetarian in the first place. (Estimates vary, you say 1% in this article, and 10% is the most aggressive one I could find.)
It seems much more likely that these surveys are invalid for some reason. Perhaps the word "slaughterhouses" confused people, or perhaps people are just answering surveys based on emotion without bothering to think through what banning slaughterhouses actually means.
This explanation of events seems to contradict several of SBF's public statements, such as:
"FTX has enough to cover all client holdings."
"We don't invest client assets (even in treasuries)."
I guess we'll know more for sure in the coming days. One big open question for EA is whether SBF's money was obtained through fraudulent or illegal activities. As far as I can tell, it is too soon to tell.
In the last few hours, Coindesk reported that Binance is "strongly leaning toward" not doing the FTX acquisition.
I believe the title of this article is misleading - FTX.com was not technically bought out by Binance. Binance signed a non-binding letter of intent to buy FTX.com. Sometimes this is just a minor detail, but in this case it seems quite important. As of the time I am writing this comment (9 a.m. California time on November 9) Polymarket shows an 81% chance that Binance will pull out of this deal.
I am not an expert in crypto, but I think people should not assume that this acquisition will go through. It is possible that FTX will just become insolvent. See the relevant Polymarket:
Some people think that, with a super-powerful AI running the world, there would be no need for traditional government. The AI can simply make all the important decisions to optimize human welfare.
This is similar to the Marxist idea of the "withering away of the state". Once perfect Communism has been achieved, there will be no more need for government.
In practice, the state didn't wither away under Stalin. It was more like Stalin gained personal control over this new organization, the Communist Party, and used it to reinforce his own dictatorship and bend the nation to his will.
If we have transformational superhuman AI, the risk of war seems quite high. But an AI powerful enough to turn the whole world into paper clips could win a war immediately, without bloodshed. Or with lots of bloodshed, if that's what it wanted.
One possible outcome of superhuman AI is a global dictatorship. Whoever controls the superhuman AI controls the world, right? The CEO of the AI company that wins the race aligns the AI to themselves and makes themselves into an immortal god-king. At first they are benevolent. Over time it becomes impossible for the god-king to retain their humanity, as they become less and less like any normal human. The sun sets on the humanist era.
But this is turning into a science fiction story. In practice a "superhuman AI" probably won't be all-powerful like this; there will be many details of what it can and can't do that I can't predict. Or maybe the state will just wither away!