
peterhase

5 karma · Joined May 2022

Comments (2)

One thing I find interesting is how similar some of the work is between Bay Area AI safety folks and other safety crowds, such as the area often referred to as "AI ethics." For example, Redwood worked on a paper about safe language generation, focusing on descriptions of physical harm, and safe language generation is a long-running academic research area (including for physical harm; see https://arxiv.org/pdf/2210.10045.pdf). The deepest motivating factors behind the work may differ, but this is one reason I think there is a lot of common ground across safety research areas.

I think I disagree with the general direction of this comment, but it’s hard to state exactly why, so I’ll just outline an alternative view:

  • Many people are building cutting-edge AI. Many of them are sympathetic to at least some safety and ethics concerns, and some are not sympathetic to any safety or ethics concerns at all.
  • Of course it is good to have a reputation as a good collaborator and employee. But it seems only instrumentally valuable to be an “ally” to cutting-edge research, and at some point you have to be honest and tell those building AI that what they’re doing is interesting but carries risks in addition to potential upsides.
  • Part of building a good reputation in the field involves honestly assessing others’ work. If you agree with work from AI safety, AI ethics, or AI bias researchers, you should just agree with them. If you disagree with their work, you should just disagree with them. “Distancing” and “aligning” yourself with certain camps is the kind of strategic move that people in research labs often view as vaguely dishonest or overly political.