All of LoveAndPeaceAlways's Comments + Replies

I'm wondering what Nick Bostrom's p(doom) currently is, given the subject of this book. Nine years ago, in a lecture on his book Superintelligence, he said there was "less than 50% risk of doom". In an interview 4 months ago he said it's good that there has been more focus on risks recently, though there's still slightly less focus on them than would be optimal; even so, he wants to focus on the upsides, because he fears we might "overshoot" and not build AGI at all, which would be tragic in his opinion. So it seems he thinks the risk is less than it used to be beca... (read more)

I think this is a good idea, however:

I was initially confused until I realized you meant hair. According to Google, "hear" isn't a word used for that purpose; the correct spelling is "hair".

0
mikbp
9mo
Yep, sorry! Not native here.

I'd like to underline that I'm agnostic, and I don't know what the true nature of our reality is, though lately I've been more open to anti-physicalist views of the universe.

For one, if there's a continuation of consciousness after death then AGI killing lots of people might not be as bad as when there is no continuation of consciousness after death. I would still consider it very bad, but mostly because I like this world and the living beings in it and would not like them to end, but it wouldn't be the end of consciousnesses like some doomy AGI safety peo... (read more)

Do the concepts behind AGI safety only make sense if you have roughly the same worldview as the top AGI safety researchers - secular atheism and reductive materialism/physicalism and a computational theory of mind?

1
JakubK
1y
Can you highlight some specific AGI safety concepts that make less sense without secular atheism, reductive materialism, and/or computational theory of mind?

You may be aware of this already, but I think there is a clear difference between saving an existing person who would otherwise have died (and in the process also reducing suffering by preventing non-fatal illnesses) and starting a pregnancy, because before a pregnancy starts the person doesn't exist yet.

7
Ariel Simnegar
1y
What is that difference, from a consequentialist perspective? (For the purpose of comparing apples to apples, let's ignore the suffering reduced by preventing an illness. What's the difference in outcome for a child between poofing them away at a young age, and preventing their birth?)

There are a couple of debate ideas I have, but I would most like to see a debate on whether ontological physicalism is the best view of the universe there is.

I would like to see someone like the theoretical physicist Sean Carroll represent physicalism, and someone like the professor Edward F. Kelly from the Division of Perceptual Studies at the University of Virginia represent anti-physicalism. The researchers at the Division of Perceptual Studies study near-death experiences, claimed past-life memories in children and other parapsychological phenomena, an... (read more)

The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik J. Larson wasn't mentioned.

1
Darren McKee
1y
Oops, looks like I read that last July (and didn't agree with the general thesis). Thanks for the comment. 

The movie Joker makes a good case that many criminals are created by circumstances like mental illness, abuse, and lack of support from society and other people. I still believe in some form of free will and individual moral responsibility, but criminals are also, to some extent, just unlucky.

You could study subjects, read books, watch movies, and play video games, provided that these things are available. But I personally think that Buddhism is particularly well optimized for solitary life, so I'd meditate, observe my mind and try to develop it, and read Buddhist teachings. Other religions could also work; Christianity, at least, has had hermits.

What would you say is the core message of the Sequences? Naturalism is true? Bayesianism is great? Humans are naturally very irrational and have to put in effort if they want to be rational?

I've read the Sequences almost twice. The first time was fun because Yudkowsky was optimistic back then, but the second time I was constantly aware that Yudkowsky now believes, along the lines of his 'Death with dignity' post, that our doom is virtually certain and that he has no idea how even to begin to formulate a solution. If Yudkowsky, who wrote the Sequences on his own, who fou... (read more)

I've been studying religions a lot, and I have the impression that monasteries don't exist because the less fanatic members want to shut the more fanatic members off from the rest of society so they don't cause harm. I think monasteries exist because religious people really believe in the tenets of their religion and think that this is the best way for some of them to follow their religion and satisfy their spiritual needs. But maybe I'm just naive.

Does anyone here know why the Center for Human-Compatible AI hasn't published any research this year, even though it has been one of the most prolific AGI safety organizations in previous years?

https://humancompatible.ai/research

How tractable are animal welfare problems compared to global health and development problems?

I'm asking because I think animal welfare is a more neglected issue, but I still donate to global health and development because I think it's more tractable.

5
saulius
2y
I think that it's very tractable. For example, I estimated that corporate campaigns improve 9 to 120 years of chicken life per dollar spent, and this improvement seems to be very significant. It would likely cost hundreds or thousands of dollars to improve the life of one human to such a degree, even in developing countries. There are many caveats to this comparison that I can talk about upon request, but I don't think that they change the conclusion. Another way to see tractability is to look at the big wins for animal advocacy in 2021 or 2020. This progress is being achieved with only about $200 million of spending per year (with a lot of it being non-EA money, I think).
5
Cameron.K
2y
I believe they are largely tractable. There's a variety of intervention types (Policy, Direct work, Meta, Research), cause areas (Alt Proteins, Farmed Animals, Wild animal suffering, Insects), organisations, and geographies in which to pursue them. Of particular note may be potentially highly tractable and impactful work in LMICs (Africa, Asia, the Middle East, Eastern Europe). I will say animal welfare is a newer and less explored area than global health, but that may mean your donation can be more impactful and make more of a difference, as there could be a snowball effect from funding new high-potential interventions or research. If you are quite concerned about tractability, perhaps you could consider donating to organisations doing more research or meta-work to discover more tractable interventions. Either way, it's not entirely clear and depends heavily on your philosophy, risk tolerance, knowledge, and funding counterfactuals.

The Center for Reducing Suffering is longtermist but focuses on the issues this article is concerned about. Suffering-focused views are not very popular, though, and I agree that most longtermist organizations and individuals seem to be focused on future humans more than on future non-human beings; at least that's my impression, and I could be wrong. The Center on Long-Term Risk is also longtermist, but focused on reducing suffering among all future beings.

6
BrianK
2y
Thank you for the insights!

Thank you for answering; your reasoning makes sense if long-term charities have a higher expected impact once the uncertainty involved is taken into account.

Thank you for answering, I subscribed to that tag and I will take a closer look at those threads.

Thank you for taking the time to answer my question. What you said makes a lot of sense, but I just feel that the future is inherently unpredictable, and I don't think I can handle that much risk.

2
Tobias Dänzer
2y
That's a perfectly fine attitude to have! In that case I would likely advise donating to short-term charities rather than long-term ones, which are more speculative. I don't have as much experience with the former myself, and so have to defer to e.g. GiveWell's recommended charities and the like. Also, if you discover in a few years that you're more or less risk-averse than you'd thought, you can still reconsider where to donate. Finally, if you care about getting as much "bang for your buck" for your EA donations, keep a lookout for ~yearly recurring donation-matching events like this current one by Double Up Drive (though in that case it's not entirely clear to me whether they match donations outside the US, and to what extent these donation matches can be considered counterfactual).
3
Zach Stein-Perlman
2y
+1 to Tobias. A complementary framing I've found useful is: would the universe be better if we spent $1 more on bednets or $1 more on improving the long-term future?

Hi, I've been interested in EA for years, but I'm not a heavy hitter. I expect to give only tens of thousands of dollars during my life.

That said, I have a problem and I'd like some advice on how to solve it: I don't know whether to focus on short-term organizations like Animal Charity Evaluators and GiveWell, or long-term organizations like the Machine Intelligence Research Institute, the Center for Reducing Suffering (CRS), the Center on Long-Term Risk (CLR), the Long-Term Future Fund, the Clean Air Task Force, and so on. It feels like long-term organizations are a huge gamb... (read more)

3
Aaron Gertler
2y
This is one of the hardest "big questions" in EA, and you've outlined what makes the question hard. You might want to wait another week or two — we have an annual post where people explain where they're giving and why. You can be notified when it goes up if you subscribe to the donation writeup tag. You can also see last year's version of that post. Maybe some of the explanations in these posts will help you figure out what point of view makes the most sense to you!
6
NunoSempere
2y
Personally:

1. Bite all the bullets; uncertain but higher expected impact > certain but lower impact.
2. It's tricky to know how good longtermist organizations are compared to each other. In the past I would have said to just defer to the LTFF, but now I feel more uncertain.