The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik J. Larson wasn't mentioned.

The movie Joker makes a good case that many criminals are created by circumstances like mental illness, abuse, and lack of support from society and other people. I still believe in some form of free will and individual moral responsibility, but criminals are also, to some extent, just unlucky.

You could study subjects, read books, watch movies, and play video games, provided these things are available. But I personally think Buddhism is particularly well suited to solitary life, so I'd meditate, observe my mind and try to develop it, and read Buddhist teachings. Other religions could also work; Christianity, at least, has had hermits.

What would you say is the core message of the Sequences? That naturalism is true? That Bayesianism is great? That humans are naturally very irrational and have to put in effort if they want to be rational?

I've read the Sequences almost twice. The first time was fun because Yudkowsky was optimistic back then, but during the second read I was constantly aware that Yudkowsky now believes, along the lines of his 'Death with Dignity' post, that our doom is virtually certain and that he has no idea how to even begin formulating a solution. If Yudkowsky, who wrote the Sequences on his own, who founded the modern rationalist movement on his own, who founded MIRI and the AGI alignment movement on his own, has no idea where to even begin looking for a solution, what hope do I have? I probably couldn't do anything comparable to those things on my own even if I tried my hardest for 30 years. I could thoroughly study everything Yudkowsky and MIRI have studied, which would be a lot, and after all that effort I would be in the same situation Yudkowsky is in right now: no idea where to even begin looking for a solution, and only knowing which approaches don't work. The only reason to do it is to gain a fraction of a dignity point, to use Yudkowsky's way of thinking.

To be clear, I don't have a fixed model of AI risk in my head. I think I can sort of understand what Yudkowsky's model is, and I can understand why he is afraid, but I don't know if he's right, because I can also sort of understand the models of those who are more optimistic. I'm pretty agnostic on this subject and wouldn't be particularly surprised by any specific outcome.

I've been studying religions a lot, and my impression is that monasteries don't exist because the less fanatical members want to shut the more fanatical members off from the rest of society so they don't cause harm. I think monasteries exist because religious people really believe in the tenets of their religion and think that this is the best way for some of them to follow their religion and satisfy their spiritual needs. But maybe I'm just naive.

Does anyone here know why the Center for Human-Compatible AI hasn't published any research this year, even though it has been one of the most prolific AGI safety organizations in previous years?

How tractable are animal welfare problems compared to global health and development problems?

I'm asking because I think animal welfare is the more neglected issue, but I still donate to global health and development because I think it's more tractable.

The Center for Reducing Suffering is longtermist but focuses on the issues this article is concerned about. Suffering-focused views are not very popular, though, and I agree that most longtermist organizations and individuals seem to focus more on future humans than on future non-human beings. At least that's my impression; I could be wrong. The Center on Long-Term Risk is also longtermist, but focused on reducing suffering among all future beings.

Thank you for answering. Your reasoning makes sense if longtermist charities have a higher expected impact once the uncertainty involved is taken into account.

Thank you for answering. I subscribed to that tag and will take a closer look at those threads.
