Please let me know the flaw in my logic, or if it is sound. I'm a big fan of EA but this was brought up in a discussion with a friend and I've been mulling over it.
1. If donating $x to cancer research saves 1 person, but donating $x to the Against Malaria Foundation saves 10, then the Against Malaria Foundation would be the correct choice based on the beliefs of effective altruism, correct?
1.1 Assume that without that $x, the 1 or 10 people (in the group that the money was not given to) will die.
1.2 Assume also that you are fully aware of both options in 1 as well as the repercussions in item 1.1.
2. By donating to the Against Malaria Foundation, you are, through opportunity cost, indirectly killing the one person with cancer.
3. Assume you had the option to directly kill 1 person to save 10 or directly kill the 10 to save 1. (Just a hypothetical, not advocated for as per forum rules).
4. In item 1, you are indirectly killing 1 to save 10. In item 3, if you choose the first option, you are directly killing 1 to save 10.
5. In both situations, you had knowledge that saving the 10 would result in killing the 1.
6. The only difference is the intent to kill that one person, which is present in the direct situation (item 3) but not the indirect one (item 1).
7. If you are aware that you will be indirectly killing them in the first situation, then what difference does the actual intent to kill make?
The first option in item 3 intuitively seems wrong, yet it seems to follow from the beliefs of effective altruism, so can someone help me identify the flaw?
A short answer might be "In real life, people view these two scenarios very differently, and ignoring psychology and sociology may get you some interesting thought experiments but will not lead you anywhere useful when working with actual humans."
Or to quote the EA Guiding Principles:
"Because we believe that trust, cooperation, and accurate information are essential to doing good, we strive to be honest and trustworthy. More broadly, we strive to follow those rules of good conduct that allow communities (and the people within them) to thrive." Not killing people seems like a pretty basic requirement for trust and cooperation. There are socially-agreed-upon exceptions like governments having use of force, but even those are divisive.
Some other writing that's addressed this:
https://www.lesswrong.com/posts/prb8raC4XGJiRWs5n/consequentialism-need-not-be-nearsighted
https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans
Thank you very much! The links were especially helpful - the doctor scenario in the first example is pretty much what I was talking about above, so their explanation of why it would be unwise to kill makes a lot of sense!