
The AI Safety Fundamentals courses are one of the best ways to learn about AI safety and prepare to work in the field.

BlueDot Impact facilitates the courses several times per year, and the curricula are available online for anyone to read. 

The “Alignment” curriculum was created and is maintained by Richard Ngo (OpenAI), and the “Governance” curriculum was developed in collaboration with a wide range of stakeholders.

You can now listen to most of the core readings from both courses:

AI Safety Fundamentals: Alignment
Gain a high-level understanding of the AI alignment problem and some of the key research directions which aim to solve it.


Listen online or subscribe:
Apple Podcasts | Google Podcasts | Spotify | RSS

AI Safety Fundamentals: Governance
Gain foundational knowledge for doing research or policy work on the governance of transformative AI.

Listen online or subscribe:
Apple Podcasts | Google Podcasts | Spotify | RSS

We've also made narrations for some readings from the advanced “Alignment 201” course, and we may record more later this year:

AI Safety Fundamentals: Alignment 201
Gain enough knowledge about alignment to understand the frontier of current research discussions. 

Listen online or subscribe:
Apple Podcasts | Google Podcasts | Spotify | RSS

Apply to join the “AI Safety Fundamentals Governance Course” July cohort!

Gain foundational knowledge for doing research or policy work on the governance of transformative AI.

Successful applicants will participate in the AI Governance course with weekly virtual classes, and join the AI Safety Fundamentals community.

Apply before 26th June 2023!

https://apply.aisafetyfundamentals.com/governance


Thoughts, feedback, suggestions?

These narrations were created by Perrin Walker (TYPE III AUDIO) on behalf of BlueDot Impact, with support from the rest of the team at TYPE III AUDIO.

We would love to hear your feedback. Do you find the narrations helpful? How could they be improved? What other AI safety material would you like to listen to? Please comment below, complete our feedback form, or write to team@type3.audio.

Comments

Can I promote your courses without restraint on Rational Animations? I think it would be a good idea since people can go through the readings by themselves. My calls to action would be similar to this post I made on the Rational Animations' subreddit: https://www.reddit.com/r/RationalAnimations/comments/146p13h/the_ai_safety_fundamentals_courses_are_great_you/

That sounds great to me, thanks!

Does anyone know the reasoning behind the naming change (from AGI Safety Fundamentals to AI Safety Fundamentals)?

We'll aim to release a short post about this by the end of the week!

Some possible bugs:

* When I click on the "listen online" option it seems broken (using this on a Mac)

* When I click on the "AGI safety fundamentals" courses as podcasts, they take me to the "EA Forum Curated and Popular" podcast. Not sure if this is intentional, or if they're meant to point to a podcast containing just the course

Thanks! Now fixed.

[anonymous]
this is great, thanks! listening is so much easier for me; i can easily listen and comprehend for 8+ hours a day, but with reading i get distracted easily after less than an hour, partly because the act of scanning words takes active focus, but comprehending and thinking are easy for me. (i might have something adhd-adjacent)

i was looking into ai text-to-speech readers before, since there's lots i'd like to read, but i couldn't find a good one. (https://www.naturalreaders.com/online/ is okay, but not ideal for me, not near the quality of solenoid entity's readings of the sequences.)

I also sometimes use naturalreaders. Unfortunately I find it a bit... unnatural at times.

I've been really enjoying Type III Audio's reader on this forum, though!
