The responsible/fair AI community (exemplified by Timnit Gebru) doesn't seem to get along very well with the EA-aligned beneficial/safe AI community.

Where can I find resources on their relationship and philosophical differences? Besides Gebru, who are some major thinkers in responsibility/fairness?

4 Answers

I would call the cluster "AI ethics". But there's no hard cutoff, no sufficient philosophical difference: it's mostly just social clustering. Here's my short diplomatic piece about the gap.

We should do our best to resist forming explicit competing factions; as Prunkl and Whittlestone note, it's all one space. Here's a principled argument for doing this.

 

(Though it is hard to avoid being factional when one group is being extremely factional toward you. And we don't need to think that each point in the space is equally worrying.)

I like Jon Kleinberg, Zachary Lipton, Carolyn Ashurst, Andrew Trask, Shakir Mohamed, Hanna Wallach, Michael Kearns, Cynthia Rudin, Yonadav Shavit, Deborah Raji, Aaron Roth, Adrian Weller, McKane Andrus, Subbarao Kambhampati, Iason Gabriel, Max Langenkamp, Arvind Narayanan. Zoe Cremer is hard to classify but shares their animus. David Manheim, Andrew Critch, and Dylan Hadfield-Menell cross the hall to some extent. You can look up AIES and FAccT papers for more. The big names tend to be less fair (ha). (I've never seen anyone near the other cluster make such a list about safety people.)

To add to the other papers coming from the "AI safety / AGI" cluster calling for a synthesis of these views...

https://www.repository.cam.ac.uk/handle/1810/293033

https://arxiv.org/abs/2101.06110

https://facctconference.org is the major conference in the area. It's interdisciplinary: a mix of technical ML work, social/legal scholarship, and humanities-type papers.

Some big names: Moritz Hardt, Arvind Narayanan, and Solon Barocas wrote a textbook, https://fairmlbook.org, and they and many of their students are important contributors. Cynthia Dwork is another big name in fairness, and Cynthia Rudin in explainable/interpretable ML. That's a non-exhaustive list, but I think it's a decent seed for a search through coauthors.

I believe there is in fact important technical overlap between the two problem areas. For example, https://causalincentives.com is research from a group of people who see themselves as working in AI safety. Yet people in the fair ML community are also very interested in causality, and study it for similar reasons using similar tools.

I think much of the expressed animosity is only because the two research communities seem to select for people with very different preexisting political commitments (left/social justice vs. neoliberal), and they find each other threatening for that reason.

On the other hand, there are differences. An illustrative one is that fair ML people care a lot about the fairness properties of linear models, both in theory and in practice right now, whereas it would be strange if an AI Safety person cared at all about linear models: they're just too small and nothing like the kind of AI that could become unsafe.
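(If it helps to make "fairness properties of linear models" concrete, here's a minimal, purely illustrative sketch on made-up synthetic data of one such property that fair ML people study: demographic parity, i.e. whether a logistic regression's positive-prediction rate differs across groups. The data, variable names, and numbers are all invented for illustration, not taken from any paper mentioned here.)

```python
# Illustrative sketch only: a demographic-parity check on a linear model,
# using synthetic data invented for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)                    # protected attribute (0 or 1)
x = rng.normal(size=(n, 3)) + group[:, None] * 0.5    # features correlated with group
y = (x.sum(axis=1) + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(x, y)
pred = model.predict(x)

# Demographic parity: compare the rate of positive predictions per group.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"positive rate (group 0): {rate_0:.2f}")
print(f"positive rate (group 1): {rate_1:.2f}")
print(f"demographic parity gap:  {abs(rate_0 - rate_1):.2f}")
```

Nothing in that check depends on the model being linear, but it's exactly these small, deployed-today models whose group-level behaviour the fair ML community scrutinises.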

Your question seems to be both about content and interpersonal relationships / dynamics. I think it's very helpful to split out the differences between the groups along those lines.

In terms of substantive content and focus, I think the three other responders outline the differences very well, particularly on attitudes towards AGI timelines and the types of models each group is concerned about.

In terms of the interpersonal dynamics, my personal take is that we're seeing a clash between left / social-justice and EA / long-termism playing out more strongly in this content area than in most others, though to date I haven't seen any animus from the EA / long-termist side. In terms of explaining the clash, I guess it depends how detailed you want to get.

One could be minimalistic and sum it up as: one or both sides hold stereotypical threat models of the other, and are not investigating those models but rather attacking based on them.

Or one could expand and explain why EA / long-termism evokes such a strong threat response in people from the left, especially marginalised communities and individuals who have been punished for putting forward ethical views, like Gebru herself.

I think the latter is important, but it requires lots of careful reflection and openness to their world views, which I think calls for a much longer piece. (And if anyone is interested in collaborating on this, I'd be delighted!)

The big differences arise in two areas: politics and AI timelines/takeoff speed.

The Responsible/Fair AI faction is political to the hilt, and on the leftist side of politics to boot. The beneficial/safe AI faction is non-political and focuses more on the abstract side of AI.

Another difference is in AI timelines/takeoff speed. The Responsible/Fair AI faction views takeoff as not happening and AGI as more than 50-100 years away. The beneficial/safe AI faction views a hard takeoff as fairly likely and AGI as only 30-50 years away.

[This comment is no longer endorsed by its author]
5 Comments

A thought about some of the bad dynamics on social media that occurred to me:

Some well-known researchers in the AI Ethics camp have been critical of the AI Safety camp (or associated ideas like longtermism). By contrast, AI Safety researchers seem to be neutral-to-positive on AI Ethics, so there is some asymmetry.

However, there are certainly mainstream non-safety ML researchers who are harshly (typically unfairly) critical of AI Ethics. And there are also AI-Safety/EA-adjacent popular voices (like Scott Alexander) who criticize AI Ethics. Then on top of this there are fairly vicious anonymous trolls on Twitter.

So some AI Ethics researchers reasonably feel like they're being unfairly attacked and that people socially connected to EA/AI Safety are in the mix, which may naturally lead to hostility even if it isn't completely well-directed.

The vibe I usually get from posts by AI safety people is that fairness research is somewhere between useless and negligibly positive.

That's the average online vibe maybe, but plenty of AGI risk people are going for detente.

These are excellent answers, thanks so much! 

As more and more students get interested in AI safety, and AI-safety-specific research positions fail to open up proportionally, I expect that many of them (like me) will end up as graduate students in mainstream ethical-AI research groups. Resources like these are helping me to get my bearings.

Good luck!

(BTW there's been a big spurt of alignment jobs lately, including serious spots in academia, e.g. here, here, here. Probably not quite up to demand, but it's better than you'd think.)