
spra 🔸

47 karma · Joined

Bio

EA + AI Safety university community builder

Posts
1


Comments
2

One framing that has helped me internalize this idea is realizing that [prospective EA-aligned employer] and I are on the same side, with (generally) the same goals. If there were another candidate much better than me, I should prefer that they be hired instead of me. In my experience this has helped shift my focus from "I need to get hired so I can make the most impact" to "I need to become the best possible candidate so that EA organizations and the community have a better talent pool to draw from and can make the most impact". This framing feels especially helpful in very competitive fields like technical AI Safety.

IMO one way in which EA is very important to AI Safety is cause prioritization between research directions. For example, there's still a lot of money + effort (e.g. the GDM + Anthropic safety teams) going toward mech interp research despite serious questions about whether it will help us meaningfully decrease x-risk. I think a lot of people do some cause prioritization, conclude that they should work on AI Safety, and then stop doing cause prio there. More people applying even a crude scale, tractability, neglectedness framework to AI Safety research directions would go a long way toward increasing the field's effectiveness at decreasing x-risk.