I published this video on my YouTube channel yesterday. 

In short, Google is clearly investing massively in AI performance, and is currently also going after employees who raise concerns about the ethics of its algorithms. This is a huge deal: the company is developing the most sophisticated AIs ever built (at least six times bigger than OpenAI's), and these algorithms will likely be deployed at global scale, before any internal safety or ethical tests, and without any possibility of external audit.

This is a worst-case scenario for AI safety governance. The AI race is creating huge pressure for performance over safety. More importantly, there is currently nowhere near enough social or legal pressure to slow the race down and to promote safety and ethics instead. This issue seems to be vastly neglected, even by EAs. Yet we are talking here about the ethics of the world's most advanced AI company, with massive global-scale consequences: Google's algorithms have repeatedly been linked to serious national security concerns (especially radicalization, as in the case of the Capitol riot), epistemic crises, and public health harms.

You'll find more information and resources in the video script, in this EA Forum post and in this other EA Forum post. Also, with colleagues, we are maintaining this Tournesol wiki to provide a global view of the problem, list resources, and document our Tournesol project to address the ethics of recommendation algorithms, which was discussed in this LW post.


Thank you for this post. My stance is that when engaging with hot-button topics like these, we need to pay particular attention to truthfulness and to the full picture. I am afraid that your video simplifies the reasons for the two researchers' dismissals quite a bit, reducing them to "they were fired for being critical of the AI", and would benefit from giving a fuller account. I do not want to endorse any particular side here, but it seems important to mention that

  1. Google wanted the paper to mention that some techniques exist to mitigate the problems raised by Dr. Gebru: "Similarly, it [the paper] raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues"
  2. Dr. Gebru sent an email to colleagues telling them to stop working on one of their assigned tasks (diversity initiatives) because she did not believe those initiatives were sincere: "Stop writing your documents because it doesn’t make a difference"
  3. Google alleges that Dr. Mitchell shared company correspondence with outsiders.

Whether or not you think any of this justifies the dismissals, these points should be mentioned in a truthful discussion.

This video seems quite sensationalist, and in many places the argument seems like a stretch. For example, you say that Timnit was fired, but the only evidence for this seems to be her own claim; in contrast, Google says she offered to resign:

Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.

Even if you think Google is mistaken or lying, I think you should at least mention this.

I would also encourage you to submit text rather than video to the forum in the future. In many cases you mentioned things I would like to respond to (for example, the idea that Google's responsibility for searches linking to bad medical advice is similar to Boeing's responsibility for plane crashes), but it is very hard to do so without text to easily search, analyse, and quote.

On text vs. video: I agree with the general point that text is easier to respond to, but in this case the video's script was linked in the post (maybe not at the time you made this comment?).

xccf

I feel very conflicted about this.

On the one hand, we don't want researchers at Google to feel any reluctance to blow the whistle on ethical issues with Google's AI algorithms.

On the other hand, I'm not convinced that the original founders of the AI ethics group were the right people for the job. You mentioned radicalization; one of them responded with "You can go fuck yourself" when asked a question about the ethics of political violence. The new ethics head says "what I’d like to do is have people have [the conversation about AI ethics] in a more diplomatic way", which seems like a good thing. I'm not optimistic about a future where the ethics of our AIs are determined by whoever yells the loudest on social media, but currently the ethics discussion in the ML community seems very heated.

For context, the specific 'question about the ethics of political violence' was itself somewhat inflammatory:

"So you’re in favor of mob violence, as long as it comes from the left?"
https://twitter.com/pmddomingos/status/1346940377840848898
