zeshen

68 karma · Joined May 2021

Comments (14)

Crossposting plex's comment from LessWrong:

Updates!

  1. Have been continually adding to this, up to 49 online communities.
  2. Added a student groups section, with 12 entries and some extra fields (website, contact, calendar, mailing list), based on the AGISF list (in talks with the maintainer to set up painless syncing between the two systems).
  3. Made a form where you can easily add entries.

Still getting consistent traffic, happy to see it getting used :)

Answer by zeshen · Jan 15, 2023

Great questions.
On question 4, I don't personally know of any groups based in Asia, but feel free to check out this database of AI Safety-relevant communities and join any of them.

Would you recommend Probability Theory: The Logic of Science to people with little math background?

This is nice, but I'd also be interested to see a quantification of moral weights for different animals that accounts for all the factors besides neuron count, and how much it differs from using neuron count alone.

I like this post much more than your previous post.

Confidence-in-path rather than confidence-in-results.

Nicely said. 

--

See you at GatherTown soon!

I really like this post, especially as someone who is fairly anxious when writing for fear of being judged as ignorant. I definitely agree that we should promote an environment conducive to people saying wrong things.

However, I don't fully agree with the notion of celebrating the self-confidence of people who "declare that they think they can do more good than Peter Singer". I may well be misinterpreting the confidence you describe as over-confidence, but taken at face value, I prefer claims to come with an appropriate level of confidence attached. When someone makes a strong claim, I'd like to know whether the person is a domain expert with good epistemics who has done extensive research to arrive at the conclusion, or an innocent kid who claims to have discovered the most important thing in the world. Perhaps just stating an epistemic status upfront would solve the problem. And perhaps over-confident people who are about to take your advice to celebrate confidence and stop rewarding modesty should reverse the advice.

On a tangential note, I sometimes find myself doing 'defensive writing' not just for defensive reasons, but also to convey what I mean to the reader as accurately as I can by ruling out alternative readings.

Looking at your profile, I think you have a good idea of the answers already, but for the benefit of everyone else who upvoted this question looking for an answer, here's my take:

Are there AI risk scenarios which involve narrow AIs?

Yes, a notable one being military AI, i.e. autonomous weapons (there are plenty of related posts on the EA Forum). There are also multipolar failure modes, i.e. risks arising from multiple AI-enabled superpowers rather than a single superintelligent AGI.

Why does most AI risk research and writing focus on artificial general intelligence?

A misaligned AGI is a very direct pathway to x-risk: an AGI that pursues some goal in an extremely powerful way, without any notion of human values, could easily lead to human extinction. The question is how to make an AI that's more powerful than us do what we want it to do. Many other failure modes, like bad actors using tool (narrow) AIs, seem less likely to lead directly to x-risk, and are also more of a coordination problem than a technical problem.

Point taken, though EA seems quite welcoming of criticism, to the point of giving out large prizes for good critiques. Scott Alexander also argues that EA actually goes too far in taking criticisms on board.

