When you say “working with African leaders”, I worry that in many countries that means “paying bribes which prop up dictatorships and fund war.” How can we measure the extent to which money sent to NGOs in sub-Saharan Africa is redirected toward harmful causes via taxes, bribes, or corruption?
I’d like to push back a bit on that - it’s common in the EA world to say that if you don’t believe in malaria nets, you must have an emotional problem. But there are many rational critiques of malaria nets. Malaria nets should not become a symbol where believing in them is a core article of the EA faith.
I think we should move away from messaging like “Action X only saves 100 lives. Spending money on malaria nets instead would save 10000 lives. Therefore action X sucks.” Not everyone trusts the GiveWell numbers, and saving 100 lives really is valuable in absolute terms, however you look at it.
I understand why doctors might come to EA with a bad first impression given the anti-doctor sentiment. But we need doctors! We need doctors to help develop high-impact medical interventions, design new vaccines, work on anti-pandemic plans, and so many other things. We should have an answer for doctors who ask “what is the most good I can do with my work?” that goes beyond merely asking them to donate money.
It is really annoying for Flynn to be perceived as “the crypto candidate”. Hopefully future donations encourage candidates to position themselves more explicitly as favoring EA ideas. The core logic that we should invest more money in preventing pandemics seems like it should make political sense, but I am no political expert.
Similar issues come up in poker - if you keep betting everything you have, you tend to lose everything too fast, even if each bet considered alone was positive EV.
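A minimal sketch of that poker point (not from the original comment; the 60/40 double-or-nothing odds are made up for illustration): each bet alone is +EV, yet going all-in every round almost guarantees ruin.

```python
import random

def all_in_survival(p_win=0.6, rounds=20, trials=100_000, seed=0):
    """Estimate the chance of surviving `rounds` consecutive all-in,
    double-or-nothing bets won with probability p_win.

    Each individual bet has positive EV (0.6 * 2 - 1 = +0.2 per unit
    staked), but a single loss wipes out the entire bankroll, so the
    survival probability is p_win ** rounds, which shrinks toward zero.
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        # Survive only if every one of the `rounds` bets wins.
        if all(rng.random() < p_win for _ in range(rounds)):
            survived += 1
    return survived / trials
```

Analytically, the survival probability here is 0.6 ** 20, roughly 4 in 100,000: near-certain ruin despite every bet being individually favorable.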
I think you have to consider expected value an approximation. There is some real, ideal morality out there, and we imperfect people have not found it yet. But, like Newtonian physics, we have a pretty good approximation: the expected value of utility.
Yeah, in thought experiments with 10^52 things, it sometimes seems to break down. Just like Newtonian physics breaks down when analyzing a black hole. Nevertheless, expected value is the best tool we have for analyzing moral outcomes.
Maybe we want to be maximizing log(x) here, or maybe that’s just an epicycle and someone will figure out a better moral theory. Either way, the principle that a human life in ten years shouldn’t be worth less than a human life today seems like a plausible foundation.
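To make the log(x) idea concrete, here is a hedged sketch (numbers again assumed, not from the original comments): maximizing expected log-wealth on the same 60/40 even-odds bet recommends staking the Kelly fraction f* = 2p - 1 rather than going all-in, which ties back to the poker point above.

```python
from math import log

def expected_log_growth(f, p=0.6):
    """Expected log-wealth growth per round when staking a fraction f
    of the bankroll on an even-odds bet won with probability p."""
    return p * log(1 + f) + (1 - p) * log(1 - f)

# For an even-odds bet the Kelly-optimal stake is f* = 2p - 1 = 0.2.
# Growth is positive at the Kelly stake but negative near all-in
# (f = 0.99), matching the "all-in loses too fast" intuition.
kelly = 2 * 0.6 - 1
```

This is only an illustration of why a log utility (or similar concave correction) tames expected-value maximization in repeated bets; it isn’t a claim about the right moral theory.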
Another source of epistemic erosion happens whenever a community gets larger. When you’re just a few people, it’s easier to change your mind. You just tell your friends, hey I think I was wrong.
When you have hundreds of people that believe your past analysis, it gets harder to change your mind. When people’s jobs depend on you, it gets even harder. What would happen if someone working in a big EA cause area discovered that they no longer thought that cause area was effective? Would it be easy for them to go public with their doubts?
So I wonder how hard it is to retain the core value of being willing to change your mind. What is an important issue that the “EA consensus” has changed its mind on in the past year?
Another issue that makes it hard to evaluate global health interventions is the indirect effects of NGOs in countries far from the funders. For example this book made what I found to be a compelling argument that many NGOs in Africa are essentially funding civil war, via taxes or the replacement of government expenditure:
African politics are pretty far outside my field of expertise, but the magnitudes seem quite large. War in the Congo alone has killed millions of people over the past couple of decades.
I don’t really know how to make a tradeoff here but I wish other people more knowledgeable about African politics would dig into it.
Is this forum looking to hire more people?
There is also a “startup” aspect to EA activity - it’s possible EA will be much more influential in the future, and in many cases that is the goal, so helping now can make that happen.
I feel like the net value to the world of an incremental Reddit user might be negative, even…
For one, I don’t see any Intercom. (I’m on an iPhone).
For two, I wanted to report a bug: whenever I write a comment, the UI zooms in so that the comment box takes up the whole width. Then it never zooms back out.
Another bug: while writing a comment zoomed in and scrolling left to right, the scroll bar appears in the middle of the text.
A third bug: when I get a notification that somebody has responded to my post, view it using the drop-down at the upper right, then try to re-use that menu, the X button is hidden off the screen to the right. Seems like a similar mobile over-zoom issue.
If your interpretation of the thought experiment is that suffering cannot be mapped onto a single number, then the logical corollary is that it is meaningless to “minimize suffering”. Because any total ordering you can place on the different possible amounts of suffering an organism experiences (at least over any countable set of outcomes) implies that they can be mapped onto a single number.