Alex Gray

Thanks for writing this. I think there are a couple of categories I'd add to the types of technical work; both are very important for this kind of technical research, and I think including them would make this kind of document much more broadly applicable.

I think the first is social scientists. In general, more and more technical research ends up intersecting extremely difficult questions that have been well studied by the social sciences. For example, working on Truthful/Honest AI systems almost inevitably leads to questions of epistemology and what constitutes acceptable evidence, and whether an AI system's outputs or decisions are harmful can be difficult to evaluate. I want to defer to https://distill.pub/2019/safety-needs-social-scientists/ for better arguments here, but mostly I want to emphasize that technical AI alignment research needs social scientists.

The second is people who are experts in gathering and managing human data and human labelers. Data is one of the fuels of this kind of research, and human-provided data in particular is very valuable to technical alignment work. Human labelers can be used to evaluate a model (and decide whether or not it should be deployed), to identify issues in its outputs, and so on. Hiring and training labelers is difficult and complex, so I would include roles for people who can facilitate and accelerate human data processes as another important part of technical AI alignment research.
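
To make that concrete, here is a minimal sketch (in Python, with made-up labels and an illustrative threshold, not any lab's actual process) of the kind of pipeline these roles support: aggregating labeler judgments about model outputs into a go/no-go deployment signal.

```python
# Hypothetical sketch: turn human labeler judgments into a deployment decision.
# Labels, threshold, and structure are illustrative assumptions only.
from collections import Counter

def majority_label(labels: list[str]) -> str:
    """Return the most common label assigned to a single model output."""
    return Counter(labels).most_common(1)[0][0]

def harmful_rate(labelled_outputs: list[list[str]]) -> float:
    """Fraction of outputs whose majority human label is 'harmful'."""
    verdicts = [majority_label(labels) for labels in labelled_outputs]
    return verdicts.count("harmful") / len(verdicts)

# Example: three model outputs, each rated by three labelers.
data = [
    ["safe", "safe", "harmful"],
    ["harmful", "harmful", "safe"],
    ["safe", "safe", "safe"],
]

if harmful_rate(data) < 0.05:  # illustrative tolerance, not a real policy
    print("within tolerance; candidate for deployment")
else:
    print("too many harmful outputs; hold back and investigate")
```

Even this toy version makes clear how much the quality of the result depends on who the labelers are, how they were trained, and how disagreements get resolved, which is exactly the expertise these roles bring.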