I think this is a good idea, however:
I was initially confused until I realized you meant hair. According to Google, "hear" isn't a word used for that purpose; the correct spelling is "hair".
I'd like to underline that I'm agnostic, and I don't know what the true nature of our reality is, though lately I've been more open to anti-physicalist views of the universe.
For one, if there's a continuation of consciousness after death, then AGI killing lots of people might not be as bad as it would be if there were no continuation. I would still consider it very bad, but mostly because I like this world and the living beings in it and would not like them to end; it wouldn't be the end of consciousnesses like some doomy AGI safety peo...
Do the concepts behind AGI safety only make sense if you have roughly the same worldview as the top AGI safety researchers: secular atheism, reductive materialism/physicalism, and a computational theory of mind?
You may be aware of this already, but I think there is a clear difference between saving an existing person who would otherwise have died (and in the process reducing suffering by also preventing non-fatal illnesses) and starting a pregnancy, because before a pregnancy starts the person doesn't exist yet.
There are a couple of debate ideas I have, but I would most like to see a debate on whether ontological physicalism is the best available view of the universe.
I would like to see someone like the theoretical physicist Sean Carroll represent physicalism, and someone like the professor Edward F. Kelly from the Division of Perceptual Studies at the University of Virginia represent anti-physicalism. The researchers at the Division of Perceptual Studies study near-death experiences, claimed past-life memories in children and other parapsychological phenomena, an...
The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik J. Larson wasn't mentioned.
The movie Joker makes a good case that many criminals are created by circumstances, like mental illness, abuse and lack of support from society and other people. I still believe in some form of free will and moral responsibility of an individual, but criminals are also to some extent just unlucky.
You could study subjects, read books, watch movies and play video games, provided that these things are available. But I personally think that Buddhism is particularly optimized for solitary life, so I'd meditate, observe my mind and try to develop it and read Buddhist teachings. Other religions could also work, at least Christianity has had hermits.
What would you say is the core message of the Sequences? That naturalism is true? That Bayesianism is great? That humans are naturally very irrational and have to put in effort if they want to be rational?
I've read the Sequences almost twice. The first time was fun because Yudkowsky was optimistic back then, but during the second time I was constantly aware that Yudkowsky believes, along the lines of his 'Death with Dignity' post, that our doom is virtually certain and that he has no idea how to even begin to formulate a solution. If Yudkowsky, who wrote the Sequences on his own, who fou...
I've been studying religions a lot, and my impression is that monasteries don't exist because the less fanatic members want to shut the more fanatic members off from the rest of society so they don't cause harm. I think monasteries exist because religious people really believe in the tenets of their religion and think that this is the best way for some of them to follow their religion and satisfy their spiritual needs. But maybe I'm just naive.
Does anyone here know why the Center for Human-Compatible AI hasn't published any research this year, even though it has been one of the most prolific AGI safety organizations in previous years?
How tractable are animal welfare problems compared to global health and development problems?
I'm asking because I think animal welfare is the more neglected issue, but I still donate to global health and development because I think it's more tractable.
The Center for Reducing Suffering is longtermist but focuses on the issues this article is concerned about. Suffering-focused views are not very popular, though, and I agree that most longtermist organizations and individuals seem to be focused on future humans more than on future non-human beings; at least that's my impression, and I could be wrong. The Center on Long-Term Risk is also longtermist, but focused on reducing suffering among all future beings.
Thank you for answering. Your reasoning makes sense if long-term charities have a higher expected impact once the uncertainty involved is taken into account.
Thank you for answering, I subscribed to that tag and I will take a closer look at those threads.
Thank you for taking the time to answer my question. What you said makes a lot of sense, but I feel that the future is inherently unpredictable, and I don't think I can handle that much risk.
Hi, I've been interested in EA for years, but I'm not a heavy hitter. I expect to give only tens of thousands of dollars during my life.
That said, I have a problem and I'd like some advice on how to solve it: I don't know whether to focus on short-term organizations like Animal Charity Evaluators and GiveWell or long-term organizations like the Machine Intelligence Research Institute, the Center for Reducing Suffering (CRS), the Center on Long-Term Risk (CLR), the Long-Term Future Fund, the Clean Air Task Force, and so on. It feels like long-term organizations are a huge gamb...
I'm wondering what Nick Bostrom's p(doom) currently is, given the subject of this book. Nine years ago, in his lecture on his book Superintelligence, he said there was "less than 50% risk of doom". In this interview four months ago he said it's good that there has been more focus on risks recently, but that there's still slightly less focus on the risks than is optimal; even so, he wants to focus on the upsides because he fears we might "overshoot" and not build AGI at all, which would be tragic in his opinion. So it seems he thinks the risk is less than it used to be beca...