Talking about AI safety is good — but trying to have an actual impact is even better. We plan on doing a group project over the next few months, with the short-term goal of publishing at a workshop (long-term goal: solving the alignment problem). It should be a great opportunity for anyone wanting to try their hand at AI safety research and gain some experience. We'll kick off by brainstorming a few ideas that we can work on together.
Read the discussion guide: https://docs.google.com/.../14c6uLcmQFzlRSd.../edit...
