Having a high-level overview of the AI safety ecosystem seems like a good thing, so I’ve created an Anki deck to help people familiarise themselves with the 167 key organisations, projects, and programs currently active in the field.

Why Anki?

Anki is a flashcard app that uses spaced repetition to help you efficiently remember information over the long term. It’s useful for learning and memorising all kinds of things – it was the main tool I used to learn German within a year – and during the time that I’ve been testing out this deck, I feel like it’s already improved my grasp of the AI safety landscape.
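For the curious, the idea behind spaced repetition is that each time you successfully recall a card, the gap before you see it again grows. A minimal sketch of the classic SM-2 algorithm (which Anki's scheduler is loosely based on – this is an illustration, not Anki's actual implementation) looks like this:

```python
# Sketch of the SM-2 spaced-repetition algorithm (illustrative only;
# Anki's real scheduler is more sophisticated).

def sm2(quality, reps, interval, ease):
    """Return (reps, interval_in_days, ease) after one review.

    quality: self-rated recall from 0 (total blackout) to 5 (perfect).
    """
    if quality >= 3:  # successful recall: grow the interval
        if reps == 0:
            interval = 1
        elif reps == 1:
            interval = 6
        else:
            interval = round(interval * ease)
        reps += 1
    else:  # failed recall: relearn the card from scratch
        reps, interval = 0, 1
    # Adjust the ease factor based on how hard recall was, floor at 1.3
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return reps, interval, ease

# Three perfect reviews of a new card: the interval grows 1 -> 6 -> 16 days.
reps, interval, ease = 0, 0, 2.5
for _ in range(3):
    reps, interval, ease = sm2(5, reps, interval, ease)
```

The key point is that well-remembered cards get pushed ever further into the future, so your daily review load stays small even as the deck grows.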

What you’ll learn

The deck is based on data from the AI Existential Safety Map run by AED – if you’re not familiar with them, you’ll learn who they are in this deck.

Each card includes:

  • The org’s full name
  • Its nickname/acronym (where applicable)
  • Its logo
  • A brief description of what it does
  • A link to its website for further info (accessed through the ‘Edit’ button for that card)

How to access

You can download the deck here.

Accuracy and feedback

Given the difficulty of summarising an entire org/project into one or two sentences, the descriptions come with the caveat of being necessarily reductive. They aim to capture the essence of each entity but may not fully encompass the breadth or nuance of their work. I encourage you to visit the link included in each card if you’d like a more comprehensive understanding of that particular org.

That being said, if you think any content should be modified, please comment below, along with any problems or suggestions for the deck in general.

If the general feedback is that this seems to be useful to people, then I may in the future create one covering the most prominent people in AI safety as well.

Thank you to @George Vii for testing out the deck in advance, and credit to all the volunteers who have contributed to the AI Existential Safety Map. This project was completed while a grantee of CEEALAR.

Comments

Great stuff! Strongly upvoted. 

I just had an idea. It could be valuable to have a monthly or bi-monthly newsletter for people who want to stay up to date with new developments in the AIS ecosystem, but who don't find the time to scroll through the EAF, LW, etc. on a regular basis to keep themselves updated.

Thank you for making this deck! I'm an avid Anki user and will start on this today.

I'm curious how useful decks like these would be for people. I'm going through AGISF right now and making a bunch of cards for studying, and the thought of publishing them for others came up, but I wasn't sure if it was worth the effort (to polish them for public use).

@Bryce Robertson, any thoughts on this? Were you approached to do this or did you come up with your own reasons as to why you started this project? If this is something that would be valuable for other resources I'd be quite excited to work on this.

The idea came about because I was looking for ways I could use Anki beyond language learning and figured this could be useful, then decided that if it seems useful for me then presumably for others too.

When I told a few people I was working on this, I generally didn’t get particularly excited feedback. It seemed like this may be at least partly because people are sceptical of the quality of shared decks, which is partly why I put a lot of time into making this one as well-designed as possible.

That’s also the reason I would personally be keen to try out someone else's deck on core concepts from an AISF course or similar, but with the caveat that if it didn’t meet a pretty high quality standard then I’d likely not use it and make one myself instead. FWIW, I used the Ultimate Geography shared deck as inspiration – it's an example of a very well-made deck.

Hope that’s useful!

Interesting, I'd be happy to give you feedback on how I like the cards (I have a reminder for ~2 months from now). For transparency, my initial goal is to have a better sense of all orgs out in the ecosystem and have it in a form that I can slow-feed to myself, not necessarily memorize these for the rest of my life.

Maybe the people you talked to didn't think having these memorized would provide benefit? Not sure if they were already fans of SRS before. I'd argue that having even a "Fermi estimate" equivalent of what people are doing and what options are out there has benefits for knowing what to apply to, what to suggest to others, etc.

I can't promise that my cards are that high quality! I almost exclusively use cloze deletions, and that may not be to everyone's taste. I do source all of my links and the text that each card is roughly based on. If you're open to taking a look at them, it'd be great to get quick feedback on whether they're helpful or not, and especially on whether it'd be worth putting more work into a deck covering the entire course.

Feedback would be great, thanks!

I completely agree that having a broad overview of what's going on in the ecosystem can be useful in many ways, and that a deck like this should be able to help with that – hopefully!

I'd be more than happy to check out your deck – feel free to send me a DM. I've never used cloze deletions on Anki so I'd be especially intrigued to see how that works.
