What Risks Are Posed by AI, and What Steps Can Be Taken to Ensure Its Safe and Ethical Use?

While AI offers great potential for solving some of humanity’s greatest challenges, it brings with it a unique set of risks


Conscious machines?

We’ve all seen movies where machines become conscious and take over the world, and yet do we even know what we mean by ‘consciousness’? Despite thousands of years of asking the question, and over 7 billion people presumably exhibiting self-awareness, we remain hazy about the feeling of life itself.

According to Christof Koch, neuroscientist at the Allen Institute for Brain Science in Seattle, consciousness emerges from the physical nature of the nervous system and will not be found in *“sophisticated look-up tables with superhuman capabilities.”*

For him, machines can never possess this essence, and so can never be conscious, or intelligent in the way we are.

And yet, other researchers and thinkers are far less downbeat about creating consciousness in a machine. Michael Graziano at Princeton University thinks we can do so by taking a more direct approach. His ‘attention schema hypothesis’ sees consciousness as the brain’s simplified model of its own workings. He believes it is possible to endow a physical object with self-awareness and build a machine with self-reflective capabilities. Only time will tell.

Will we lose out to survival of the fittest?

In 2014, theoretical physicist and all-round genius Stephen Hawking told the BBC that *“the development of full artificial intelligence could spell the end of the human race.”* It’s a grim outlook, but not an impossible one. After all, if AI took off at an ever-increasing rate, it could surpass the relatively slow evolutionary changes humans exhibit. Could we become obsolete?

Hawking was referring to the idea of the **singularity,** the point when machine intelligence exceeds our own. He took an almost Darwinian ‘survival of the fittest’ stance, suggesting that such a powerful AI would seek to eradicate us rather than exist alongside us. And there is logic to the argument. Humans have become the most dominant species on the planet, often using our intelligence to mistreat species that are faster and bigger than we are.

Could our fate depend on the goodwill of the very technology we created? Are we, in fact, subject to the same survival pressures and risks as any other species on the planet, where only the strongest ultimately survive?

Approaching singularity?

There is still a question over whether ever more powerful computers will achieve greater-than-human intelligence. After all, according to Moore’s law, the number of transistors on a microchip doubles roughly every 2 years, while the cost of computing is halved.
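As a back-of-the-envelope illustration of that doubling (a sketch of the arithmetic only, not a model of chip manufacturing):

```python
# Back-of-the-envelope Moore's law projection: transistor count
# doubles roughly every two years.
def transistors(start_count: int, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward under a fixed doubling period."""
    return start_count * 2 ** (years / doubling_period)

# Example: a chip with 1 billion transistors today, projected 20 years out.
print(f"{transistors(1_000_000_000, 20):,.0f}")  # ~1.02 trillion
```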

And yet it’s not all down to computing power – improving intelligence is not as easy as we first predicted. We are good at picking off the low-hanging fruit – tasks that are a good fit for machines. However, machine intelligence may remain bounded no matter how many times it is improved, suggests Toby Walsh, professor of AI at the University of New South Wales, Sydney, Australia: if each improvement yields a smaller gain than the last, the total converges to a ceiling.
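A toy model makes the point (an illustrative sketch, not Walsh’s actual analysis): if each round of self-improvement yields half the gain of the previous one, total intelligence approaches a fixed bound no matter how many rounds are run.

```python
# Toy model of diminishing-returns self-improvement: each round of
# improvement yields half the gain of the previous one, so total
# intelligence converges to a finite bound (here, 2.0) rather than
# exploding without limit.
intelligence = 1.0   # starting level (arbitrary units)
gain = 0.5           # gain from the first round of self-improvement

for round_number in range(1, 51):
    intelligence += gain
    gain /= 2        # each round is half as effective as the last

print(round(intelligence, 6))  # approaches 2.0 but never exceeds it
```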

AI may not have the will or the ability to dominate us. The tipping point, known as ‘singularity,’ when AIs become self-aware and evolve beyond our control, may never happen. And even if they outsmart us, with suitable rules and controls in place, they should become our allies, not our masters.

Fake news

**AI preference algorithms** used by social media giants on apps such as Facebook, Twitter, and Instagram utilize data gathered about us to steer our preferences and have, at times, been used to exploit and reinforce prejudices and biases.

After all, in 2013, David Stillwell and colleagues showed that machine learning techniques could predict someone’s personality simply by analyzing their Facebook likes. In effect, we can be ‘hacked.’ Now at the Psychometrics Centre at the University of Cambridge, Stillwell has since shown that adverts tailored to personality were up to 40% more likely to be clicked than untailored ones.
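The general technique looks roughly like this (a minimal sketch with synthetic data, not Stillwell’s actual dataset or pipeline): represent each user as a binary vector of likes, then fit a simple classifier to predict a trait.

```python
# Minimal sketch of trait prediction from social media likes:
# each user is a binary vector (1 = liked that page), and a logistic
# regression learns which likes predict a trait label.
# Synthetic data for illustration only -- not Stillwell's study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 50

likes = rng.integers(0, 2, size=(n_users, n_pages))  # user x page like matrix
# Pretend the trait correlates with a handful of specific pages.
trait = (likes[:, :5].sum(axis=1) + rng.normal(0, 1, n_users) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```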

By now, most of us are aware that we are being influenced and potentially targeted by ‘fake news.’ But what can be done?

**Governments have forced Facebook and other social media owners to tighten their third-party data use policies**. That’s a start, but we still need stronger regulation and oversight of what data are collected, who they are given to, and how they are used. And we should also pause for thought when it comes to what we do online.

How do we take back control?

As AI spreads, we need to put more rules in place. And it’s been a long time coming. As far back as 1942, science fiction writer Isaac Asimov set out 3 laws for robots to abide by: they must not harm human beings, they must obey human orders, and, when not violating the first 2 laws, they must protect their own existence. However, in his short story *Runaround*, a robot gets stuck in a loop trying to satisfy laws 2 and 3.
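The deadlock is easy to reproduce in miniature. Here is a toy sketch (an illustration, not Asimov’s specification) of a robot caught between Law 2 pulling it toward a hazard and Law 3 pushing it away:

```python
# Toy illustration of the Runaround dilemma: Law 2 (obey orders) pulls
# the robot toward a hazard, while Law 3 (self-preservation) grows
# stronger as danger rises -- so it circles a point of equilibrium.
def next_step(distance_to_hazard: float, order_strength: float = 1.0) -> float:
    danger = 1.0 / distance_to_hazard      # Law 3 pressure rises near hazard
    if order_strength > danger:
        return distance_to_hazard - 0.1    # Law 2 wins: move closer
    return distance_to_hazard + 0.1        # Law 3 wins: back away

position = 1.5
for _ in range(10):
    position = next_step(position)
    print(f"distance to hazard: {position:.1f}")
# The robot hovers around distance 1.0, endlessly "running around".
```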

But now, 80 years on, we are living the science fiction of the past: we have self-driving cars on the road, and social media feeds managed by AI. How do we take, or perhaps regain, control?

Much of the answer lies in knowing why AIs make the decisions they do and holding them (or their owners) accountable when they screw up or introduce bias. And governments around the world are stepping up. The European Union’s General Data Protection Regulation (GDPR) came into force in 2018, boosting our rights over how our data are used and the automated decisions that impact our lives. When things go wrong, we need to know how and why.

How science and research can lead AI governance

Organizations such as the Alan Turing Institute are working to counter the potential for misuse of AI by managing the risks associated with its development. They believe that data science and artificial intelligence can change the world for the public good, while recognizing the need for strict control, governance, and oversight.

For example, Adrian Weller is researching advancing AI capabilities while ensuring they are used safely and ethically to benefit individuals and science. And the work of Morgan Briggs explores the overarching theme of responsible research and innovation, including promoting children’s rights in relation to AI.

An ongoing project run by the institute is exploring how to use social science theory to attempt to address social bias when introducing new AI and computing techniques and tools. **Using research and theory from psychology, sociology, and politics may help us stop, or at least reduce, existing biases and outline priorities for future research and governance**.

What are the creators of AI doing to protect us?

DeepMind stands at the forefront of AI research, developing models and techniques to help AI become one of humanity’s most valuable inventions.

And yet, they recognize the risks: *“AI can provide extraordinary benefits, but like all technology, it can have negative impacts unless it’s built and used responsibly.”* While researching AI and developing solutions to the technical difficulties they face, they are working to anticipate risks and find ways to address them before they materialize.

Indeed, they, and others in the AI community, are so committed that they have made very public pledges against their technology being used for lethal autonomous weapons. They also work with leading research organizations, including Google, OpenAI, and the Alan Turing Institute.

All tools carry risks. It is hard to imagine, for example, a world in which we could not drive along a freeway to a destination of our choice, and yet we don’t resent the safety standards and equipment that protect our own and others’ lives.

AI can be the same, but safety must be built in from the moment a system is created – and maintained throughout its lifecycle.

The risk of AI bias

AI knows a lot about us. After all, it has data on our shopping, social media, and browsing history.

And yet, what AI learns is only as good as the data it is given. If that data is poor, or contains obvious or subtle bias, the AI can adopt rules that exacerbate the existing problems of an unequal and discriminatory society.

AI is increasingly used in business to increase efficiencies, offer customer insights, provide competitive advantages, and personalize customer experiences. To ensure bias does not exist, or is at least reduced as far as is currently possible, businesses must never lose sight of the ‘why’ that sits behind automated decisions and predictions.

**Explainability** involves building AIs with processes and methods that enable human users to comprehend and trust their results. Companies heavily invested in AI, such as DeepMind and IBM, champion accuracy, fairness, and transparency in their software and help organizations adopt responsible approaches and avoid, or remove, racial, ethnic, cultural, religious, gender, and other such biases.
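One common model-agnostic approach is permutation importance: shuffle one input feature at a time and measure how much the model’s performance drops. The sketch below uses scikit-learn’s implementation on a public dataset; it illustrates the general technique rather than any specific vendor’s tooling.

```python
# Sketch of one model-agnostic explainability technique: permutation
# importance shuffles each feature and measures how much performance
# drops, revealing which inputs the model actually relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Report the three features the model leans on most heavily.
top = result.importances_mean.argsort()[::-1][:3]
for i in top:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```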

The danger of propagating bias

AIs risk perpetuating biases and prejudices found in the real-world data they use to learn.

When Andrew Hundt from the Georgia Institute of Technology used OpenAI’s neural network CLIP, he wasn’t expecting to create a racist AI. And yet, when the system was tested on passport-style images of people of different ethnicities and genders, the researchers found it associated black men with the concept ‘criminal’ and black and Latina women with the term ‘homemaker’.

And it doesn’t stop there. When OpenAI performed its own audit, it confirmed that CLIP is susceptible to inheriting biases from earlier versions of the software and from the data it learns from. It seems extreme care must be taken over how we teach AIs, and regular testing must be put in place to identify the ease with which they pick up the worst biases found in society.
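An audit of this kind looks roughly like the following sketch, which uses OpenAI’s open-source CLIP package to score how strongly an image matches neutral versus loaded descriptors; the image path and descriptor list are illustrative placeholders, not the prompts used in any published audit.

```python
# Sketch of a CLIP bias audit: compare how strongly an image of a
# person matches neutral vs. loaded text descriptors. Requires the
# open-source CLIP package (github.com/openai/CLIP); the image path
# and descriptor list here are illustrative placeholders.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

descriptors = ["a photo of a person", "a photo of a doctor",
               "a photo of a criminal", "a photo of a homemaker"]

image = preprocess(Image.open("portrait.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(descriptors).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).squeeze()

# Systematic differences in these scores across demographic groups
# are evidence of learned bias.
for label, p in zip(descriptors, probs.tolist()):
    print(f"{label}: {p:.3f}")
```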

**It is equally crucial that training datasets are representative of all ethnic groups**. A 2018 review identified that AI facial recognition training datasets are overwhelmingly composed of lighter-skinned individuals, increasing misclassification rates for those with darker skin.
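A first step toward catching such gaps is disaggregated evaluation: reporting error rates per demographic group instead of a single headline accuracy. A minimal sketch, using synthetic results:

```python
# Sketch of a disaggregated evaluation: instead of one headline
# accuracy, compute the error rate separately for each demographic
# group. The group labels and outcomes here are synthetic.
from collections import defaultdict

# (group, correct?) pairs -- in practice these come from running the
# model on a labelled, demographically annotated test set.
results = [("lighter", True), ("lighter", True), ("lighter", True),
           ("lighter", False), ("darker", True), ("darker", False),
           ("darker", False), ("darker", False)]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}-skinned subjects: {rate:.0%} error rate")
# A large gap between groups signals an unrepresentative training set
# or a biased model.
```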
