
I will admit that I have not done a lot of research into the topic and might be slightly biased as a Kenyan.

Edit: The tasks that are currently outsourced are data labelling, NOT the training of AI models.

This is just to ask whether anybody else is concerned about the tendency of major tech companies to outsource labour to the Global South. The services outsourced are either the training of AI models or, for a longer time now, content moderation of social media sites.

There are instances where this work is outsourced to countries such as Kenya or India, for reasons that I can only assume include cheap labour, lax enforcement of labour rights, and fewer regulatory restrictions in these countries. These are similar to the reasons that drive the outsourcing of manufacturing jobs to the Global South.
My general worry is that, in the future, the Global South will become the training ground for more harmful AI projects that would be prohibited within the Global North. Is this something that I and other people should be concerned about?

2 Answers

It is hard to make any confident claims when so much depends on the details. As a very simplistic example, a US-based firm could outsource manual content moderation to a Kenyan for a wage that is low by Kenyan standards (as described in the links you shared), or it could pay a Kenyan a wage that is excellent by Kenyan standards to do a "good" job.

I care about labor rights, and it is no surprise that some US-based companies pay small amounts of money to people in underdeveloped countries for unpleasant tasks. I dislike it, but it doesn't seem very tractable to me. Imagine how incredibly hard it would be for the US government to adopt a law requiring that all outsourcing comply with US labor law.

There is also the old argument that this job may be terrible, but people choose to do it because it is the best option available to them: it is better than the alternatives. Many young people from rural areas migrate to cities and work very unenjoyable jobs in factories because they find it preferable to staying in the countryside and being farmers. I'm not fully convinced by this argument, but I do think there is some aspect of it that makes sense: these people are choosing to do this job rather than other jobs.

What you have stated is entirely true. However, there are also contentions that, at least in Kenya, Meta violated Kenyan labour laws in its use of a content moderation firm. Meta tried to claim, rather unsuccessfully, that it shouldn't be bound by Kenyan laws since it isn't based in Kenya. This can be found here and here. I think it is slightly indicative that the use of outsourcing companies aims to limit liability in jurisdictions outside of the US. The case hasn't been concluded, since it has been sent to mediation. It is what raised this question…

It does seem bad if prohibited dangerous models could be trained in other jurisdictions, but the issues there - about regulating developers and datacenters - seem quite distinct from data labeling and the like, which seem to be the sorts of services that are currently outsourced in the way you described.

You're 100% correct. It is data labelling (an oversight on my part).

Aaron_Scher
Due to current outsourcing being of data labeling, I think one of the issues you express in the post is very unlikely. Maybe there's an argument about how:

* current practices are evidence that AI companies are trying to avoid following the laws (note I mostly don't believe this),
* and this is why they're outsourcing parts of development,
* so then we should be worried they'll do the same to get around other (safety-oriented) laws.

This is possible, but my best guess is that low wages are the primary reason for current outsourcing. Additionally, as noted by Larks, outsourcing data-centers is going to be much more difficult, or at least take a long time, compared to outsourcing data-labeling, so we should be less worried that companies could effectively get around laws by doing this.