Update, 12/7/21: As an experiment, we're trying out a longer-running Open Thread that isn't refreshed each month. We've set this thread to display new comments first by default, rather than high-karma comments.
If you're new to the EA Forum, consider using this thread to introduce yourself!
You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
(You can also put this info into your Forum bio.)
If you have something to share that doesn't feel like a full post, add it here!
(You can also create a Shortform post.)
Open threads are also a place to share good news, big or small. See this post for ideas.
I took another look at that section; it was interesting to learn more about the alchemists.
I think most AI alignment researchers consider 'sentience' to be unimportant for questions of AI existential risk: it turns out not to matter whether an AI is conscious or has qualia or anything like that. [1] What matters much more is whether an AI can model the world and gain advanced capabilities, and AI systems today are making rapid progress along both of these dimensions.
My favorite overview of the general topic is the AGI Safety Fundamentals course from EA Cambridge. I found taking the actual course very worthwhile, but they also make the curriculum freely available online. Weeks 1-3 are mostly about AGI risk and link to a lot of great readings on the topic. The remaining weeks look at different approaches to solving AI alignment.
As for what has changed specifically in the last 8 years: I probably can't do the topic justice, but here are a couple of things that jump out at me:
One is the identification of inner alignment as a distinct problem: even if we specify the right training objective, the trained model may end up pursuing a different objective of its own. Links on inner alignment: Canonical post on inner alignment, Article explainer, Video explainer
The other is the pace of capabilities progress. Here are some problems that were unsolved in 2015, along with the AI advances since then that solved them:

- Beating humans at Go (AlphaGo)
- Beating humans at StarCraft II (AlphaStar)
- Predicting the folded structure of biological proteins (AlphaFold)
- Advanced linguistic/conversational abilities (GPT-3, PaLM)
- Generalizing knowledge to competence in new tasks (XLand)
- Artistic creation (DALL·E 2)
- Multi-modal capabilities combining language, vision, and robotics (SayCan, Socratic Models, Gato)
Because of these rapid advances, many people now expect transformative AI to arrive many years sooner than they previously thought. That leaves less time to solve the alignment problem.
--
[1]: Whether an AI is sentient matters a lot for moral questions about how we should treat advanced AI. But those are separate questions from AI x-risk.