Abstract: A demonstration that the philosophy of Effective Altruism (hereafter “EA”), particularly its reliance on the free market to gather means that are then used to promote human welfare, including the reduction of risks to human existence (our working definition of EA), is contradictory, and therefore ineffectual.

Epistemic status: Modest confidence. 

  1. EA’s efforts are intended to decrease existential risk, among other harms (by definition of EA).
  2. Present prevailing social, political, and economic systems (hereafter “venal systems”) encourage existential risk, and so are non-altruistic; as non-altruistic venal systems proliferate in overwhelming influence, existential risk will increase proportionately. (By observation: present venal systems take no feasible measures against existential risk. Indeed, that such venues as LessWrong or the Alignment Blog exist to mitigate existential risk shows that existential risk exists, and that present venal systems permit its existence, or at least do not preclude it in any way that would make LessWrong and the Alignment Blog redundant, as their users acknowledge they are not.)
  3. EA requires present venal systems to exist, in order to gain from the free market the resources that are supposed to increase human welfare, preserve human lives, and mitigate existential risk (by definition of EA; present venal systems, particularly the economic ones, are necessary conditions for the “effectiveness” that EA’s name denotes).
  4. The present venal systems named above increase the number of people who act contrary to the ideals of EA (since such systems require, and so promote, unregulated economies and self-interested people in order to exist, as noted in 2); that is, as more lives are saved, the saved are statistically likely to join, and thus increase the proliferation of, the current venal systems that encourage existential risk. (Explication: adherents of EA are already a minority in numbers and influence, as noted in 2 by the fact that, contrary to their influence, existential risk still exists, which permits the conditions noted in 2; and the likelihood of exerting disproportionate influence decreases at such low population levels.)

(Proof of continued minority, by cases. First, assume that a fixed proportion of the human lives saved by EA efforts (and thus still a minority) become Effective Altruists; in that case, as lives are saved, the majority will participate in existing venal systems, so such systems and their existential risks will proliferate proportionately. Alternatively, suppose EAs convince a majority of the humans saved to become altruists; but this is contrary to the observation above, and to 2), that EAs are a minority in both population and influence. By observation, too, no argument or effort by EAs has yet made their philosophy an influence-majority over the non-altruistic venal systems, which remain unsupplanted. Nor could such an argument exist while EA exists, since EA relies on the preponderance of venal systems for its support, by 3). Current venal systems win majorities without need of arguments; indeed there are none, any such argument being contradictory, the venal systems themselves being self-contradictory in inviting the destruction of venality through existential risk. Consider the final case, in which a smaller proportion of the lives saved by EA become EAs than were EAs already; but then there are fewer EAs, and more participants in venal, existentially risky, systems.)
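The first case above (a fixed minority proportion of the saved becoming EAs) can be illustrated with a minimal numeric sketch. All figures and the function name here are hypothetical illustrations introduced for this example, not data from the post; the point is only that when the fixed fraction is below one half, the non-EA majority grows faster in absolute terms every period, so EAs remain a minority indefinitely.

```python
# Hypothetical illustration of the fixed-proportion case: a fraction `p_ea`
# of the lives saved each period become EAs; the remainder join the
# prevailing ("venal") systems. All numbers are invented for illustration.

def population_after(periods, saved_per_period=1000, p_ea=0.25):
    """Tally EAs vs. participants in prevailing systems among lives saved."""
    eas, venal = 0.0, 0.0
    for _ in range(periods):
        eas += saved_per_period * p_ea
        venal += saved_per_period * (1 - p_ea)
    return eas, venal

eas, venal = population_after(periods=10)
print(eas, venal)  # 2500.0 7500.0: with p_ea < 0.5, EAs stay a minority
```

The sketch makes the case's structure visible: the conclusion depends only on the fixed fraction staying below one half, not on the absolute number of lives saved.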

From 1)-4): EA’s efforts increase population; increased population increases the proliferation of non-altruistic venal systems; proliferating venal systems increase existential risk; by hypothetical syllogism, EA’s efforts increase existential risk, contrary to the definition of EA in 1).
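The chain above can be rendered schematically (the letters are shorthand introduced here for illustration, not the author’s notation): let $E$ stand for “EA’s efforts succeed,” $P$ for “population increases,” $V$ for “non-altruistic venal systems proliferate,” and $R$ for “existential risk increases.” Then:

$$E \to P, \qquad P \to V, \qquad V \to R \;\;\therefore\;\; E \to R$$

with the conclusion following by two applications of hypothetical syllogism; the soundness of the argument therefore rests entirely on the three conditional premises, not on the form of the inference.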

Therefore EA is contradictory, and therefore ineffectual (as was to be demonstrated). 

(Libertarianism likewise permits the free-market malfeasance and existential risk that will leave no one remaining to exercise liberties.)

That this argument can exist implies there can be no reasoning by EA that will engender majority support, because this argument precludes any support of EA (though this threatens circular reasoning; still, as noted in the proof by cases, any successful argument for majority support of EA destroys EA’s economic supports).

Besides which, the consequentialist or utilitarian ethic underlying EA was subject to the Repugnant Conclusion, and so was not of itself adequate. And the notion that human welfare is important is itself a human bias: nonhuman animals would not rank human welfare highly; only because their welfare goes unconsidered is human welfare so well regarded.

By the terms of the proposed ethic of “Going-On,” described in this author’s LessWrong article “Worldwork for Ethics,” EA is more morally right than the present existential-risk-inviting systems. However, the above-noted ethics upon which EA relies entail a solution to the AI value-alignment problem by an “anthropocentric-alignment” approach which, as the author’s foregoing article demonstrates preliminarily (and a subsequent article, by vector-space analysis, shall demonstrate conclusively), is impossible. To this “anthropocentric-alignment” error the author attributes the failure of all foregoing attempts to “solve” alignment; continued erroneous efforts would result in the destruction of the only as-yet-known sentient species.

Regrettably, the ethic of “Going-On” requires that this eventuality be forestalled (absent another entity capable of all possible effective procedures); and it requires those who know it (and who thus must act in obedience to it, if they act at all) to undertake ceaselessly to disseminate the ethic, and the possibilities for action that it enables and requires.

It is not credible that Going-On is inexplicable; one belatedly realizes that it has been ignored because most of those seeking alignment thought EA correct, and other ethics therefore redundant. Accordingly it was necessary to nullify EA, with regret tempered by the hope that Going-On, or a more effective successor ethic, will be adequate to preserve universal consistency and discovery (that is, per Going-On, the ongoing effort to find something more than survival).

Please do not vote without an explanatory comment (votes are convenient for moderators, but are poor intellectual etiquette sans information that would permit the “updating” of beliefs).






I think it's "poor intellectual etiquette" to require people to comment along with votes: if I'm posting, I'm interested in whether readers find it valuable or not, even if they understandably don't want to prioritize explaining why they think I'm right or wrong.

I don't agree that EA requires current venal systems to exist. For example, in a state communist society, or an anarchist society, or a libertarian society, you can still imagine people trying to work out how to do the most good with their resources. Of course current EAs work within current systems, but that just seems necessary to get anything done. 

Down-voted, because I think the argument's premises are flawed, and the conclusions don't necessarily follow from the premises. It relies heavily on a "fruit of the poison tree" idea that because EA gets resources from civilization, and civilization can create the tools of its destruction, EA is inherently flawed. That is nonsense. The argument could be used to dismiss any kind of action that uses resources as being morally corrupt and ineffectual. Surely at the margin there are actions that reduce existential risk more than promote it.
