We run weekly meetups about AI safety. Every other week we work through the AGI SF course; on the alternating weeks we hold meetups focused on clarifying our understanding of AI existential risk (x-risk) and exploring open questions and discussion topics. At these meetups, members can bring in ideas from articles outside AGI SF and take time to critically reflect on their views. These meetups also leave space for project work, such as editing the AI safety wiki 'Stampy', for sharing efficient learning and productivity techniques, and for fun social activities such as games nights.
The overall goal of the group is to effectively reduce AI existential risk.
The core values include:
• AI x-risk reduction
• Good epistemology
• A friendly social environment