UriKatz

75 karma · Joined Oct 2014

Posts
1

Comments
37

I applaud you for writing this post.

There is a huge difference between statement (a), "AI is more dangerous than nuclear war," and statement (b), "we should, as a last resort, use nuclear weapons to stop AI." It is irresponsible to downplay the danger and horror of (b) by claiming Yudkowsky is merely displaying intellectual honesty by making explicit what treaty enforcement entails (not least because everyone studying or working on international treaties is already aware of this, and is willing to discuss it openly). Yudkowsky is making a clear and precise declaration of what he is willing to do if necessary. To see this, one only needs to consider the opposite position, statement (c): "we should not start nuclear war over AI under any circumstance." Statement (c) could reasonably be included in an international treaty dealing with this problem without that treaty losing all enforceability; there are plenty of other enforcement mechanisms. Finally, the last thing anyone defending Yudkowsky can claim is that there is a low probability we will need to use nuclear weapons: AI research is more likely to continue than it is to lead to human annihilation. Yudkowsky is gambling that by threatening the use of force he will prevent a catastrophe, but there is every reason to believe his threats increase the chances of a similarly devastating one.

It seems to me that no amount of argument in support of the individual assumptions, or of the set taken together, can make their repugnant conclusions more correct or palatable. It is as if Frege's response to Russell's paradox had been to write a book exalting the virtues of set theory. Utility monsters and utility legions show us that there is a problem either with human rationality or with human moral intuitions. If they don't, then the repugnant conclusion does for sure, and it is an outcome of the same assumptions and the same reasoning. Personally, I refuse to bite the bullet here, which is why I am hesitant to call myself a utilitarian. If I had to bet, I would say the problem lies with assumption 2. People cannot be reduced to numbers, either when trying to describe their behavior or when trying to guide it. Appealing to an "ideal" doesn't help, because the ideal is actually a deformed version. An ideal human might have no knowledge gaps, no biases, no calculation errors, etc., but why would their well-being be reducible to a function?

(note that I do not dispute that from these assumptions Harsanyi’s Aggregation Theorem can be proven)

the quest for an other-centered ethics leads naturally to utilitarian-flavored systems with a number of controversial implications.

This seems incorrect. Rather, it is your four assumptions that "lead naturally" to utilitarianism. It would not be hard for a deontologist to be other-focused simply by emphasizing the a priori normative duties that are directed towards others (I am thinking here of Kant's matrix of duties: perfect/imperfect and towards self/towards others). The argument can even be made, and often is, that the duties one has towards oneself are meant to allow one to benefit others (e.g. skill development). If by other-focused you mean abstracting from one's personal preferences, values, culture, and so forth, deontology might be the better choice, since its use of a priori reasoning places it behind the veil of ignorance by default.

I only read the TL;DR and the conclusion, but I was wondering why the link between jhana meditation and brain activity matters. Even if we assume materialism, the Path in its various forms (I am intimately familiar with the Buddhist one) always includes other steps, and only taken together do they lead to increased happiness and mental health. My thinking is that we should go in one of two directions: direct manipulation of the brain, or a holistic spiritual approach. This middle way, ironically, seems to leave out the best of both worlds.

I am responding to the newer version of this critique found [here](https://www.radicalphilosophy.com/article/against-effective-altruism).

Someone needs to steelman Crary's critique for me, because as it stands I find it very weak. This is how I understand the article:

  1. The institutional critique - This basically makes two claims: a) EAs are searching for their keys only under the lamppost. This is a great warning for anyone doing quantitative research and evaluation; EAs are well aware of it and try to overcome the problem as much as possible. b) EA addresses symptoms rather than underlying causes, i.e. distributing bed-nets instead of overthrowing corrupt governments. This is fair as far as it goes, but the move to tackling underlying causes does not necessarily require abandoning the quantitative methods EA champions, and it is not at all clear that we shouldn't attempt to alleviate symptoms as well as causes.

  2. The philosophical critique - Essentially amounts to arguing that there are people critical of consequentialism and of abstract conceptions of reason. More power to them, but that fact in itself does not defeat consequentialism, so insofar as EA relies on consequentialism, it can continue to do so. A deeper dive is required to understand the criticisms in question, but there is little reason for me to assume at this point that they will defeat, or even greatly weaken, consequentialist theories of ethics. Crary actually admits that in academic circles they fail to convince many, but dismisses this because in her opinion it is "a function of ideological factors independent of [the arguments'] philosophical credentials".

  3. The composite critique - adds nothing substantial except to pit EA against woke ideology. I don't believe these two movements are necessarily at odds, but there is a power struggle going on in academia right now, and it is clear which side Crary is on.

  4. EA's moral corruption - EA is corrupt because it supports global capitalism. I am guilty as charged on that count, even as I see capitalism's many, many flaws and the need to make some drastic changes. Still, just like democracy, it is the least bad of the available options until we come up with something better. Working within this system to improve the lives of others and solve some pressing worldwide problems seems perfectly reasonable to me.

As an aside I will mention that attacking "earning to give" without mentioning the concept of replaceability is attacking nothing at all. When doing good, try to be irreplaceable; when earning money on Wall Street, make sure you are completely replaceable. You might earn a little less, but you will minimize your harm.

Finally, it is telling that Crary does not once deal with longtermist ideas.

What would you say are the biggest benefits of being part of an EA faith group?

I am not sure about the etiquette of follow up questions in AMAs, but I’ll give it a go:

Why does being mainstream matter? If, for example, s-risk is the highest-priority cause to work on, and the work of a few mad scientists is what is needed to solve the problem, why worry about the general public's perception of EA as a movement, or of EA ideas? We can look at growing the movement as growing the number of top performers and game-changers in their respective industries who share EA values. Let the rest of us enjoy the benefit of their labor.

Well, it wouldn’t work if you said “I want a future with less suffering, so I am going to evaluate my impact based on how many paper clips exist in the world at a given time”. Bostrom selects collaboration, technology, and wisdom because he thinks they are the most important indicators of a better future and reduced x-risk. You are welcome to suggest other parameters for the evaluation function, of course, but not every parameter works. If you read the analogy to chess in the link I posted, it will become much clearer how Bostrom is thinking about this.

(if anyone reading this comment knows of evolutions in Bostrom’s thought since this lecture I would very much appreciate a reference)