I just browsed through it; their reasons for not doing so are also described in a section of the report.
I think it would be great if this post were also on GiveDirectly's website, perhaps under the blog section?
Crossposting plex's comment from LessWrong:
Updates!
- Have been continually adding to this, up to 49 online communities.
- Added a student groups section, with 12 entries and some extra fields (website, contact, calendar, mailing list), based on the AGISF list (in talks with the maintainer to set up painless syncing between the two systems).
- Made a form where you can easily add entries.
Still getting consistent traffic, happy to see it getting used :)
Great questions.
On question 4, I don't personally know of any groups based in Asia, but feel free to check out this database of AI Safety-relevant communities and join any of them.
Would you recommend Probability Theory: The Logic of Science to people with little math background?
This is nice, but I'd also be interested to see quantification of moral weights for different animals when accounting for all the factors besides neuron count, and how much the result differs from using neuron count alone.
I suspect a large part of the crux is the definition of AGI itself. I don't know many people who think that an agent / system must fulfill all of the above criteria to qualify as 'AGI'. I personally use the term AGI to refer to systems that have at least human-level capabilities at all tasks that a human is capable of performing, regardless of whether the system possesses other properties like consciousness and free will.
On a separate matter, I think it might be a good idea to have a dual voting system for posts, just as with comments, where people can upvote/downvote and also agree/disagree vote. This is a post that I would upvote but strong-disagree with. In the meantime I gave it an upvote anyway, since I like posts that at least attempt to constructively challenge prevailing worldviews, and also to balance out all the downvotes.