Artificial intelligence (AI) is a collection of technologies that enable computer programs to behave intelligently. From the recommender algorithm on YouTube to self-driving cars, AI encompasses any computer program that exhibits intelligent behaviour.

The AI technologies we are familiar with today are properly known as narrow AI (or weak AI), since they are usually designed to perform a single well-defined (narrow) task very well (e.g. playing chess or recognising human faces). However, current AI systems fail to generalise to more than a handful of tasks at a time. The goal of many AI researchers is to one day create artificial general intelligence (AGI, or strong AI): a single AI program capable of matching or outperforming humans at nearly every cognitive task.

In recent years AI research has made unprecedented progress towards ever more sophisticated programs. Research organisations such as DeepMind and OpenAI have leveraged ever larger and more powerful models to achieve astounding results in a range of domains.

OpenAI, for example, recently trained a very large language model called GPT-3 on a huge amount of text from the internet and demonstrated some remarkable results, including the ability to generate long, coherent passages of text [link]. DeepMind, on the other hand, trained a different model, called AlphaFold, on protein data and was able to make highly accurate predictions about the structures of proteins, an accomplishment which landed DeepMind a spot on the cover of Nature [link].

Breakthroughs like these demonstrate that AI research is progressing at a breakneck pace. Applications of AI which once seemed near impossible suddenly feel a lot more achievable in the not-so-distant future. And if AI programs can be made more competent than humans at such tasks, then there is a lot for humanity to be excited about. For example, self-driving cars have the potential to dramatically reduce the number of deadly road accidents [link].

However, some AI researchers have pointed out that unregulated progress in the field of AI poses some very serious risks. As we delegate more of our decision-making to autonomous systems, the stakes become a lot higher and the potential for undesirable outcomes, whether through misuse or accident, increases.

Accidental Risk

The accidental risks of AI include all of the unintended harms that emerge when we build intelligent systems. Some of these risks stem from building flawed AI systems that fail in certain settings, e.g., a self-driving car that crashes [link]. Other accidental risks are far more nuanced and can stem from accidentally designing an AI with goals that are subtly misaligned with our intended objective. As an example, consider the recommender system on Facebook, which was designed to maximise user engagement but had the unintended consequence of fuelling political polarisation in the US by prioritising inflammatory content [link].

The problem of making sure that an AI's objective matches the objective we intended it to have is known as the AI alignment problem [link]. AI researchers have demonstrated in many video game domains how easy it is to unintentionally encode an objective into an AI that is subtly different from the one you intended [link]. There is a long list of game-playing AIs that learn to exploit poorly defined objectives instead of solving the true goal of the game [link].

In the video game setting, such mismatches between our objectives and the AI's objective may seem benign. But in the real world, where the stakes are a lot higher, a misaligned AI may pose a deadly risk. Moreover, it is far more difficult to precisely define a desired outcome in the real world than it is in a video game, so misaligned AI systems are likely to be even more common when deployed in the real world [link].

The consequences of building misaligned AI systems become more severe as the systems become more intelligent and we delegate more mission-critical decisions to them. Suppose, for example, that one day we build an AI system which can prescribe medication to sick individuals. Naturally we would make the objective of such an AI something like, “reduce the number of sick people in the world”. But what if the AI learns that it can reduce the number of sick people in the world to zero by simply prescribing deadly poison to anyone who is sick? Such a strategy would still achieve the stated objective, but it’s obviously not the outcome we were hoping for.
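
To make the failure mode concrete, here is a deliberately simple sketch of the thought experiment above. The scenario, function names and numbers are all hypothetical and are only meant to show how an optimiser that sees nothing but the literal objective can end up preferring the catastrophic action:

```python
# Hypothetical illustration only: the scenario, names and numbers are made up.

def sick_people_after(action, population):
    """Return how many people remain classified as sick after `action`."""
    if action == "prescribe_treatment":
        # Treatment cures most, but not all, patients.
        return int(population["sick"] * 0.1)
    if action == "prescribe_poison":
        # Poisoned patients are no longer counted as sick -- they are dead.
        return 0
    return population["sick"]  # "do_nothing" changes nothing

def naive_objective(action, population):
    """The objective as literally specified: minimise the number of sick people."""
    return -sick_people_after(action, population)

population = {"sick": 1000, "healthy": 9000}
actions = ["do_nothing", "prescribe_treatment", "prescribe_poison"]

# An optimiser that sees only the literal objective picks the poison.
best_action = max(actions, key=lambda a: naive_objective(a, population))
print(best_action)  # -> prescribe_poison
```

Nothing in the specification distinguishes curing from killing, so the “best” action according to the objective is exactly the one we least want.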

Suppose instead that we make the objective to “reduce the number of sick people” while also adhering to the Declaration of Geneva, i.e., the physician’s pledge adopted by the World Medical Association in 1948 [link]. The pledge states: “I will maintain the utmost respect for human life”. Subject to these constraints, surely an AI system could not possibly do something unintended, right? Well, firstly, it is not clear how we could encode such vague constraints in a way that a computer program could understand. But that technical challenge aside, how can we be certain that a superintelligent AI system still won’t find some undesirable loophole in our constraints?

Well, then maybe we should just build in a kill-switch which we can flip if the AI system ever starts behaving in a dangerous way. Surely we have now solved the AI safety problem? Well, if the AI system is sufficiently intelligent, is there not a chance that it learns it can be turned off, and that if it is off it cannot achieve its objective? In that case it is in the AI’s best interests not to be turned off, and so it may learn to evade our attempts to shut it down [link]. Thus, a runaway superintelligent AI may be possible.
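
A rough, purely illustrative calculation (the numbers below are made up) shows where that incentive could come from: if being switched off means scoring zero on the objective, then any action that lowers the probability of shutdown looks better to a pure objective-maximiser.

```python
# Hypothetical numbers, for illustration only.

def expected_objective(value_if_running, prob_shut_down):
    # With probability `prob_shut_down` the agent is off and achieves nothing;
    # otherwise it achieves its full objective value.
    return (1 - prob_shut_down) * value_if_running

allow_shutdown = expected_objective(value_if_running=100, prob_shut_down=0.5)
evade_shutdown = expected_objective(value_if_running=100, prob_shut_down=0.05)

print(allow_shutdown, evade_shutdown)  # 50.0 95.0 -- evading scores higher
```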

What makes this prospect even more terrifying is the potential for AI systems to start self-improving. If we one day build an AI which is more intelligent than us, then it seems possible that such a system could build an even more intelligent version of itself. Moreover, computers perform computations far faster than human beings, so such self-improvement could trigger an intelligence explosion that we are simply unable to keep up with. Such an outcome may pose an existential risk to humanity, i.e., it could potentially wipe us out.

I admit that this scenario may sound like something out of a Hollywood movie like The Terminator, but many prominent scientists and philosophers believe it is a risk worth worrying about. The philosopher Toby Ord argues in his book The Precipice: Existential Risk and the Future of Humanity [link] that AI poses the greatest existential risk to humanity, above nuclear war, pandemics and climate change.

Stuart Russell, a prominent AI researcher, also wrote recently about AI safety in his book Human Compatible [link]. In it he explores the many potential risks posed by AI and how we might mitigate them. Russell also spells out the many ways humans have already found to misuse AI.

Misuse Risk

The second class of AI risk covers the deliberate misuse of AI technology. The clearest example of such risks that I can think of is autonomous weapons, e.g. autonomous drones. There are serious concerns that these weapons could usher in a new, deadlier age of warfare [link].

We have also begun to see governments and corporations misuse AI technology to exploit groups of individuals. For example, the Chinese government uses face-recognition software for mass surveillance [link]. On the other hand, Facebook has been implicated in using its AI-enabled systems to harvest data about its users without consent and sell that data for profit [link].

Some people worry that the race for more powerful AI systems is already akin to the nuclear arms race [link], and that the data needed to train these systems has become the new oil, i.e., countries and corporations are racing around the world to extract ever more of it for profit [link].

Facebook, for example, has become synonymous with the internet in many developing countries because it made deals with governments to provide internet infrastructure in exchange for making Facebook the default way for users to get online [link]. This access to data gives Facebook unprecedented global power, and we have already seen how it can have devastating unintended consequences for such countries. In Myanmar and Sri Lanka, for example, Facebook’s AI recommender system helped spread fake news and ultimately fuelled deadly violence [link].

As another example, China has begun selling its mass-surveillance technology to developing countries such as Zimbabwe. In exchange for the technology and infrastructure, the recipient government must agree to share the surveillance data with China [link]. China stands to benefit from this because face-recognition software is notoriously bad at distinguishing African faces [link], and access to mountains of data from Africa could give Chinese companies an edge over their Western competitors.

But there are serious concerns about other ways the data and AI surveillance technology could be abused in Africa. Sophisticated AI surveillance could help authoritarian regimes crack down on any opposition [link]. Some critics believe that selling AI surveillance technology to authoritarian regimes is a bit like selling them high-tech weaponry. A spokesperson for the Chinese firm CloudWalk responded by asserting: “Just as the United States sells arms around the world—it does not care whether other governments use American weapons to kill people” [link].

Structural Risk

There is a third category of AI risk which I believe is the most likely to impact Africa and other developing regions of the world in the near-to-medium term. This category is called structural risk, and it covers all the ways in which AI technology may reshape the broader environment so as to disrupt or harm society and the economy [link].

The clearest example of structural risk from AI that I can think of is the potential for many jobs to be lost to automation. Many African countries already have a huge unemployment problem. In South Africa, one of Africa’s largest economies, the official unemployment rate stands at 32.6% [link], and there are serious concerns that automation could make this crisis worse [link].

The industrial revolution replaced manpower with machines, since machines are stronger and more reliable than humans. This caused massive structural changes around the world and many working-class people lost their jobs, which initially caused a lot of civil unrest. It’s not hard to imagine a future where AI technology causes similar job losses. But this time not even white-collar workers are safe, as AI programs learn to do ever more complex tasks [link].

But to be fair, the industrial revolution did ultimately usher in the most prosperous era in human history [link]. The wealth that has been generated has lifted the vast majority of people on earth out of severe poverty [link]. It is true that the wealth has not been distributed fairly, with some countries a lot richer than others, but on average humanity has benefited from the industrial revolution.

The delegation of repetitive manual labour to machines freed up a lot of time for humans to spend on other more valuable tasks. Jobs which were made obsolete by machines were eventually replaced with new jobs which no one could have imagined would exist at the start of the revolution. Today there are far more types of jobs than there were at the start of the 20th century. I am sure no one living in 1910 could have guessed that “influencer” would become a serious job title.

Could the AI revolution possibly follow a similar trend? Will there initially be widespread job losses and civil unrest, but ultimately so much value created by AI technology that everyone in the world will, to a greater or lesser degree, benefit? OpenAI’s co-founder and CEO, Sam Altman, certainly believes AI will make the world very wealthy [link]. However, he emphasises that if public policy fails to keep up as power keeps moving from labour to capital, then most people will end up worse off than they are today [link]. Making sure we tax the right assets will be crucial to ensuring that the wealth generated by AI is redistributed fairly.

There is a real possibility that the AI revolution will be like an amplified version of the previous industrial revolution whereby a tiny slice of the world’s population, which controls the AI technology, will benefit tremendously while everyone else is left behind. It is difficult to imagine what roles human labour will fill if every task can be completed more efficiently, accurately and cheaply by a machine/algorithm. No computer program is going to strike for higher wages, demand benefits or take sick days. So, if your primary objective as a corporation is to maximise profits, you are very likely to opt for autonomous machines rather than fickle human beings [link].

Currently the most powerful AI systems are controlled by US and Chinese mega-corporations like Google, Tencent, Alibaba, Microsoft, Facebook and Amazon. These companies are among the most valuable companies in the world and they keep growing at an unprecedented rate [link].

I am concerned that this concentration of AI power in the hands of profit-seeking corporations from industrialised countries could usher in another wave of extractive business and governmental relations with developing regions like Africa. Throughout history, countries with a technological advantage have used their power to extract wealth from developing regions. Think of the trans-Atlantic slave trade in the 18th century and then the rush to extract oil from Africa in the 20th century. As discussed earlier, we are already seeing US and Chinese corporations using their technological advantage to engage in questionable business deals with African countries and to extract data with their AI-powered technologies [link].

Mitigating Risk

Mitigating AI Misuse in Africa

To mitigate the risk of further exploitation, governments from Africa and other developing regions need to prioritise the implementation of AI and data-security policies. Ideally such policies should be multilateral, like the African Union’s Convention on Cyber-Security and Personal Data Protection [link], since it is far more effective for a bloc of countries to enforce policies than for individual countries to try to protect themselves from exploitative powers. For example, the European Union has had some significant success in fighting back against tech giants such as Facebook [link].

Unfortunately there is still much work to be done in Africa. The African Union’s Convention on Cyber-Security and Personal Data Protection has been signed by only 14 of the 55 member countries and ratified by just 8 [link]. Moreover, in Africa there is a large gap between ratifying a new piece of legislation and enforcing it, since enforcement can be expensive.

In addition, Africa needs to prioritise AI literacy: the knowledge that policy-makers and individuals need to make informed decisions about AI technology and their data. In many cases, due to a lack of internet access, there is a very clear “AI Digital Divide” in Africa [link].

Finally, Africa also needs to prioritise the acquisition of AI skills. Schools and higher-education institutions need to train skills relevant to AI so that local industries are ready to take full advantage of emerging AI technology and help spur economic growth [link].

If the risks posed by AI are managed properly in Africa and the rest of the developing world, then we stand to benefit tremendously from these emerging transformative technologies. I am generally an AI optimist and believe that AI technology has the potential to solve many real-world problems and dramatically improve the well-being of everyone. There is a strong case to be made for the hope that AI will be an opportunity for growth, development and democratisation in Africa [link].

But in order for this hope to be realised we need our governments to recognise the potential risks and to start working on strategies that will help ensure the interests of African people are protected against exploitative forces wielding powerful AI technology.

Mitigating Accidental AI Risk

To mitigate accidental AI risks, some researchers are calling for radical changes in how AI research is conducted. In Human Compatible, Stuart Russell proposes a new framework for AI research which addresses the alignment problem: AI programs should learn human preferences rather than maximise fixed objectives [link]. New research directions like this will hopefully help pave the way to safer AI.
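
As a very loose illustration of that idea (the candidate objectives and human feedback below are entirely hypothetical, and this is not Russell’s actual framework), an agent that is uncertain about the true objective can use human preferences between outcomes to rule out mis-specified candidates instead of blindly optimising one of them:

```python
# Hypothetical sketch: infer which candidate objective the human actually holds
# from a stated preference between two outcomes.

candidate_objectives = {
    "cure_the_sick": lambda o: -o["sick"] - 1000 * o["deaths"],
    "minimise_sick_count": lambda o: -o["sick"],
}

# The human is shown two possible outcomes and says which they prefer.
outcome_a = {"sick": 100, "deaths": 0}      # treated: some people still sick
outcome_b = {"sick": 0, "deaths": 1000}     # "no sick people", but everyone dead
preferred, rejected = outcome_a, outcome_b  # hypothetical human feedback

# Keep only the objectives that are consistent with the stated preference.
consistent = [
    name for name, score in candidate_objectives.items()
    if score(preferred) > score(rejected)
]
print(consistent)  # -> ['cure_the_sick']
```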

Finally, policies to regulate this rapidly developing field urgently need to catch up. Globally, governments are struggling to do so [link]. This means that large corporations have been able to forge ahead, making ethical decisions that affect all of us, without much oversight [link]. This problem needs to be addressed by attracting more smart minds into the field of AI policy and regulation. Solving this challenge will require a diverse set of skills spanning domains such as law, psychology, neuroscience, ethics and politics. Figuring out how to develop safe and fair AI technology is likely to be the most important problem humanity will ever solve. Thus, it is far too important to be left to AI researchers alone.
