The rapidly advancing field of AI is challenging our understanding of what it means for a machine to display human-like intelligence.
Just imagine for a moment
Take a moment, and think about the amazement the audience in Alexandria must have felt over 2,000 years ago when Ktesibios showed off the first self-regulating machine – a water clock that controlled its own flow.
Let’s skip forward to the staggering manufacturing advances of the early 1800s, with Jacquard’s loom, which read punched cards to automate the weaving of patterned textiles. And then, imagine being there in 1936, as genius **Alan Turing** lays the theoretical groundwork for the ‘Turing machine,’ capable of implementing any computer algorithm; and in his groundbreaking 1950 paper he poses the question, “Can machines think?”
And now, the present day. With AI giants such as **DeepMind**, we are finding ways to improve cancer diagnosis, reduce energy use, cut carbon emissions, prevent sight loss, and improve voice synthesis. We are beginning to answer questions that have eluded humanity for so long and to understand the potential of granting intelligence to machines.
It’s not all game shows
But what do we even mean when we talk about intelligence? To answer that question, we must look beyond the pattern-finding and word-matching skills of IQ tests. According to cognitive researcher Rosalind Arden from the London School of Economics, intelligence *“reflects a broader and deeper capability for comprehending our surroundings”* and figuring out what to do. This means that intelligence is embedded in our actions and our environment.
And what researchers call **‘general intelligence’** is something at which humans excel. We learn from experience, changing in response to the information we receive. And yet we aren’t unique – other animals share virtually all of our cognitive abilities, says Arden. An octopus can solve puzzles, and an antelope, standing on the African savannah, can select the most nutritious grasses with incredible expertise. Intelligence should not be thought of as uniquely human; it can be found in other forms.
What is AI?
AI means different things to different people. It typically refers to building **human-like behavior into a machine or system** – usually, a digital computer driven by human-defined algorithms that store, retrieve, and process data.
We aim to use computers to mimic the capabilities, or the ‘what,’ of the human mind for problem-solving and decision-making, but not necessarily the ‘how.’
Over time, it has become more widely accepted that rather than focusing on developing machines that think and act ‘like’ humans, a better approach is to create systems that think and act rationally.
AI draws on theories from **cognitive science and computer science**, combining complex algorithms and robust datasets to enable problem-solving. And the uptake of AI technology is advancing rapidly. Indeed, according to Forbes in 2021, the use of AI in many sectors increased by a massive **270%** over the preceding four years. With the proliferation of expansive datasets, increases in processing power, and decreasing prices of technology, machine learning is proving a powerful ally for business and research.
The rewards are big
While AI has significant challenges, the rewards are out there, and they’re not just financial. AI may just have the power, and even the creativity, to **solve some of our biggest challenges**. Indeed, when Lee Sedol, legendary player of the ancient Chinese game ‘Go,’ lost four of five games to DeepMind’s AlphaGo, he commented that at one point he was dumbstruck – “this move was really creative and beautiful.”
Experts at DeepMind are taking AI out into the real world and using creativity to solve one of the toughest challenges in biology – accurately predicting the shape of proteins from their sequence. This is crucial not only for medical research but also for creating proteins efficient at breaking down waste plastics or creating renewable biofuel.
AI has the potential to create trillions of dollars of value across the economy, unlock whole new branches of scientific endeavor, and uncover the secrets of how life works.
AI’s biggest challenge?
Humans are capable of tackling almost any previously unseen situation.
Imagine walking into a new restaurant. We can walk around without knocking over tables, interact with the waiter, and choose from an unfamiliar menu, all while maintaining a conversation with our date.
AIs are typically **specialized**. They may be good in a single area, solving a specific problem, but, once outside it, they are lost. Future AI needs to transfer learning from existing domains to new ones.
Indeed, according to Ross Anderson, researcher and professor at the University of Cambridge, AI is about **statistical machine learning**, often using large amounts of crowdsourced data. The AI processes the information, looking for patterns, and assesses the results against human-given goals.
We are terrific at teaching machines to play games, set insurance premiums, and populate a Facebook feed with scarily relevant ads. And yet, ordering an over-priced chicken salad while impressing a dinner date with witty banter is currently out of scope for the machines.
Generalized learning
Steve Wozniak, co-founder of Apple (no, the other guy), came up with a novel test of whether machines can be said to have true generalized learning. And it’s remarkably mundane – at least for us evolved *Homo sapiens*.
**The day you can get an AI to walk into any American household and make a cup of coffee, you have generalized intelligence**. After all, a lot is happening underneath this apparently simple act. As intelligent agents, we must navigate an unknown environment, use unfamiliar equipment, and not trip over the dog.
And yet, Stephen Cave, at the Centre for the Future of Intelligence at the University of Cambridge, offers us some words of warning about judging AI abilities through anthropomorphic eyes. The way an AI works is nothing like the human brain, including how it tackles goals, how it handles limitations, and what capacities it has. As a result, AIs will perhaps always *“be profoundly different to us large-brained apes.”*
What are the tools of AI?
According to Peter Norvig, director of research at Google, AI is about building machines that act intelligently, using machine learning to gradually improve their accuracy while imitating the way humans learn.
Found in early research – Arthur Samuel’s program famously beat self-proclaimed checkers master Robert Nealey in 1962 – machine learning remains dominant and widespread, present in Netflix’s recommendation engine and integral to self-driving cars.
In machine learning, **statistical models** are used to train algorithms to classify, predict, and uncover insights in data-mining projects, and they can be used to power decision-making.
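To make that concrete, here is a minimal sketch of the train-then-predict workflow in Python, using the open-source scikit-learn library and its bundled iris dataset – both chosen here purely for illustration; any statistical model and labelled dataset would do:

```python
# Illustrative toy example: fit a statistical model to labelled data,
# then predict labels for examples the model has never seen.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # flower measurements and species labels

# Hold back a quarter of the data so we can test on unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)  # a simple statistical model
model.fit(X_train, y_train)                # 'training' = estimating parameters

print("accuracy on unseen data:", model.score(X_test, y_test))
```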
And yet, teaching an AI chess through supervised learning alone would require a database containing vast numbers of winning moves – and even that could never cover more than a vanishing fraction of all possible games.
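To get a feel for the scale, take Claude Shannon’s classic back-of-the-envelope estimate for chess – roughly 35 legal moves per position, over games of around 80 plies:

$$
35^{80} \approx 10^{123}
$$

That is more possible games than there are atoms in the observable universe, and far beyond anything a database of examples could enumerate.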
So researchers often turn to other approaches, such as **reinforcement learning**, which relies on trial and error; unsupervised neural networks that hunt for useful features; and transferring learning from one domain to another.
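As an illustration of the trial-and-error idea, here is a minimal sketch of tabular Q-learning on a made-up ‘corridor’ world – the environment, rewards, and hyperparameters are all invented for this example:

```python
import random

# Toy corridor: states 0..4; the agent earns a reward only at state 4.
# It learns by trial and error which action pays off in each state.
N_STATES, ACTIONS = 5, (-1, +1)        # actions: step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Nudge the value estimate towards the observed outcome.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy heads right (+1) from every state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
```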
The problem of explainability
AI is never static. It is a continually evolving set of techniques and technologies – more so now than ever. As a result, many questions surround its implementation: Can we fully explain what the algorithms are doing? Why do they make the choices and predictions they do? Are there biases in the data?
Without knowing how AI solves its challenges, we can’t fully comprehend or trust its results, and we are left with a big problem – **explainability**. If AI rejects a loan application or advises a medical procedure, and we don’t know why, do we allow it to continue? Can we take that risk? Do we even know what biases are creeping in?
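One family of techniques researchers use to pry open these black boxes is permutation importance: scramble one input feature at a time and watch how much the model’s accuracy suffers. The sketch below assumes Python with scikit-learn and its bundled breast-cancer dataset – our choices, purely for illustration. It doesn’t solve explainability, but it shows the kind of probing that is possible:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train an opaque model, then probe it: shuffle one feature at a time
# and measure the accuracy drop. A big drop means the model leans
# heavily on that feature. (Illustrative toy example only.)
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

rng = np.random.default_rng(0)
for i, name in enumerate(data.feature_names):
    X_scrambled = X_test.copy()
    # Destroy this feature's signal by shuffling its column.
    X_scrambled[:, i] = rng.permutation(X_scrambled[:, i])
    drop = baseline - model.score(X_scrambled, y_test)
    if drop > 0.01:  # report only the influential features
        print(f"{name}: accuracy drop {drop:.3f}")
```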
Part of the answer lies in building deep ethical constraints into the creation of AI agents and in regulating their use. For reference, ‘**agent**’ is the term used for AIs that perceive their environment and make autonomous decisions in order to achieve their goals. After all, most of us don’t know enough about medicine to validate its safety or efficacy, and yet we trust independent third parties to provide that assurance.
DeepMind states, “*we want AI to benefit the world, so we must be thoughtful about how it’s built and used*.” Otherwise, how can we benefit science and humanity equally?
Where are we now, and what’s next?
This is an exciting time for AI. Its potential is most likely beyond our imagination.
Companies are sending AI-equipped machines, such as drones and self-driving cars, out into the real world to increase the speed at which they can gather data. And researchers are creating virtual environments where almost unlimited trials can be run. One day, data may no longer be the bottleneck.
Humanoid robots, such as **Atlas**, have shown they can overcome obstacles and walk on uneven ground. And, further afield, NASA’s Remote Agent program generated plans from human-set goals on board the Deep Space 1 spacecraft, detecting and recovering from problems as they occurred – and its successors have helped plan the daily operations of NASA’s Mars rovers.
Elsewhere, **machine translation now covers the native languages of 99% of the world’s population**, handling hundreds of billions of words daily. Assistants from technology giants such as Amazon, Google, and Apple are now answering questions and carrying out tasks based on verbal requests.
And yet this is only part of the story and only a tiny sample of the potential for AI. The most significant impacts may result from the alliances being formed between intelligent computer agents and researchers, exploring relatively untapped pools of data, to understand the nature of life itself.