Alexander Herwix 🔸

620 karma · Joined

Participation
4

  • Organizer of Effective Altruism Cologne
  • Attended an EA Global conference
  • Attended an EAGx conference
  • Attended more than three meetings with a local EA group

Comments
138

I don't think it's unreasonable to discuss the appropriateness of particular timelines per se, but the fact remains that this is not the purpose or goal of the book. As I acknowledged, short- to medium-term timelines are helpful for motivating the relevance or importance of the issue. However, I think timelines in the 5-to-50-year range are a very common position now, which means that the book can reasonably use this as a starting point for engaging with its core interest, the conditional what if. 

Given this as a backdrop, I think it's fair to say that the author of this post is engaging in a form of straw manning. He is not simply saying: "look, the actions suggested are going too far because the situation is not as pressing as they make it out to be, we have more time"... No, he is claiming that "Yudkowsky and Soares' Book Is Empty" and blaming them for not giving an explicit argument for how to build an AGI. I mean, come on, how ironic would it be if the book arguing against building these kinds of machines provided the template for building them? 

So, I really fail to see the merit of this kind of critique. You can disagree with the premise that we will be able to build generally intelligent machines in the nearish future, but given the trajectory of current developments, it seems a little far-fetched to claim that the book starts from an unreasonable premise. 

As I have said multiple times now, I am not against having open debate about these things, I am just trying to explain why I think people are not "biting" for this kind of content. 

P.S.: If you look at the draft treaty they propose, I think it's clear that they are not proposing to stop any and all AI R&D, but specifically R&D aimed at ASI. Given the general-purpose nature of AI, this will surely limit "AI progress", but one could very well argue that we already have enough societal catching up to do with where we are right now. I also think it's quite important to keep in mind that there is no inherent "right" to unrestricted R&D. As soon as any kind of "innovation" such as "AI progress" also affects other people, our baseline orientation should be one of balancing interests, which can reasonably include limitations on R&D (e.g., nuclear weapons, human cloning, etc.). 

I didn’t comment on the accuracy of individual timelines but emphasized that the main topic of the book is the conditional what if… It doesn’t really make sense to critique the book at length for something it only touches upon tangentially to motivate the relevance of its main topic. And they are not making outrageous claims here if you look at the ongoing discourse and the ramping up of investments. 

It’s possible to take Yudkowsky seriously even if you are less certain about timelines and outcomes. 

It could be an interesting exercise for you to reflect on the origins of your emotional reactions to Yudkowsky’s views. 

You are not addressing the key point of my comment, which concerns the nature of their argument and your straw manning of their position. Why should I take your posts seriously if you feel the need to resort to these kinds of tactics? 

I am just trying to provide you with some perspective on why people might feel the need to downvote you. If you want people like me to engage (although I didn’t downvote, I don’t really have an interest in reading your blog), I would recommend meeting us where we are: concerned about current developments potentially leading to concentration of power or worse, and looking for precautionary responses. Theoretical arguments are fine, but your whole "confidence" vibe is very off-putting to me given the situation we find ourselves in. 

I didn’t downvote, but it seems like you are attacking a straw man here… The book is explicitly focused on the conditional IF anyone builds it. They never claim to know how to build it but simply suggest that it is not unlikely to be built in the future. I don’t know in which world you are living, but this starting assumption seems pretty plausible to me (and to quite a few other people more knowledgeable than me on these topics, such as Nobel Prize and Turing Award winners…). If not in 5, then maybe in 50 years. 

I would say at this point the burden is on you to make the case that the overall topic is nothing to worry about. Why not write your own book or posts where you let your arguments speak for themselves? 

So, you do it on purpose, not out of inability? Thanks for clarifying.

I love this question and I am looking forward to seeing what hedonic utilitarians come up with here. This has similar vibes to computronium thought experiments, but better. Thanks for pointing this question out to me :)

Thanks for sharing this! It's an entertaining read and a valuable reminder of the limits of our perspectives. I love how the cleaner shows up at the end. True koan vibes!

I don't have time to read the full post and series, but the logic of your argument reminds me very much of Werner Ulrich's work. It may be interesting for you to check him out. I will list suggested references in order of estimated cost/benefit. The first paper is pretty short but already makes some of your key arguments and offers a proposal for how to deal with what you call "unawareness". 

Ulrich, W. (1994). Can We Secure Future-Responsive Management Through Systems Thinking and Design? Interfaces, 24(4), 26–37. https://doi.org/10.1287/inte.24.4.26

Ulrich, W. (2006). Critical Pragmatism: A New Approach to Professional and Business Ethics. In Interdisciplinary Yearbook for Business Ethics, Vol. 1. Peter Lang.

Ulrich, W. (1983). Critical Heuristics of Social Planning: A New Approach to Practical Philosophy. P. Haupt.

I think it would be helpful not to use longtermism in this synonymous way because it’s prone to lead to misunderstandings and unproductive conflict. 

For example, there is a school of thought called the person-affecting view, which denies that future, non-existing people have moral patienthood but would still allow reasonable discussions about intergenerational justice, in the sense that children might want to have children, etc. 

In general, I wouldn’t characterize those views as any more or less extreme or flat-footed than weak forms of longtermism. I think these are difficult topics that are contentious by nature. 

For me, the key is to stay open-minded and seek some form of discursive resolution that allows us to move forward in a constructive way that is ideally acceptable to all. (That’s a critical pragmatist stance inspired by discourse ethics.)

This is why I appreciate your curiosity and willingness to engage with different perspectives, even if it’s sometimes hard to understand opposing viewpoints. Keep at it! :)
