The stuff of science fiction
Science fiction is littered with stories of the rise of machines and their attempts to take over the world and dominate humankind. Take the 1927 German sci-fi extravaganza Metropolis – widely regarded as one of the most influential films ever made – in which the ‘Maschinenmensch,’ or ‘Machine-Person,’ unleashes chaos throughout the city of Metropolis.
And what of the dominance of machines in the Matrix quadrilogy, enslaving humans in a virtual world to harvest the power the machines need to survive?
While it is easy to imagine how super-intelligent robots could harm their creators, it is also realistic – with the proper controls in place – to recognize their capacity to help us reach further into our human potential.
Although most robots act in strictly controlled environments, such as factories, others, supported by human supervision, are exploring off-world environments. Just think of NASA’s Mars rover ‘Perseverance,’ some 200 million km away, or the ‘da Vinci’ robot assisting distant surgeons with remote operations.
The potential for robotics to do good is only limited by our imagination and ability to make the right choices.
What are robots?
So what do we mean when we talk about robots? Well, according to Stuart Russell, professor of computer science at the University of California, and Peter Norvig, director of research at Google, “robots are physical agents that perform tasks by manipulating the physical world.”
And while you may have seen robots on YouTube performing impressive feats of dexterity – solving Rubik’s Cubes, running through woods, doing backflips, and even attempting the Running Man dance – they may not be quite as good as they seem. Even the most advanced robots fall over and need human intervention.
Indeed, somewhere in the background, a person usually controls them using a combination of a laptop and an Xbox controller. While not fully autonomous, such technology is still at the frontiers of robotics, pushing the boundaries of what is possible by trying to teach AI to “adapt on the fly to new situations,” says Chelsea Finn at Stanford University.
If robots operating in the real world don’t have well-designed hardware, they aren’t going to get far. Sensors provide the information a robot needs to monitor a continually changing environment – one altered both by its own movement and by events outside its control.
Sensors such as cameras, light sensors, and microphones passively observe what’s going on around them.
Active sensors, including sonar and radar, send energy out into the robot’s environment and monitor its reflection. Among them are ‘structured light projectors,’ which project grid patterns of light onto a scene and monitor how the lines bend, providing invaluable information about an object’s shape. Scanning lidars (light detection and ranging) are typically used by autonomous vehicles, offering highly accurate distance information over a range of 100 m. The global positioning system (GPS) is equally helpful in determining the robot’s location with a high degree of accuracy.
And finally, proprioceptive sensors, which measure the state of the robot itself – such as battery charge, wheel and limb position, and speed – are essential for interacting with objects, the environment, and even humans. After all, like humans, robots need to keep track of their motion: in this case the turns of their wheels rather than feet, and of actuators rather than muscles.
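Tracking motion from wheel turns is often called dead reckoning. Here is a minimal sketch of the idea for a two-wheeled robot – the function name, wheel dimensions, and pose format are illustrative assumptions, not any particular robot’s API:

```python
import math

def dead_reckon(pose, left_turns, right_turns,
                wheel_radius=0.05, wheel_base=0.3):
    """Update an (x, y, heading) pose from proprioceptive wheel-turn
    readings -- the robot's equivalent of counting footsteps.
    wheel_radius and wheel_base are in meters (hypothetical values)."""
    x, y, theta = pose
    dl = 2 * math.pi * wheel_radius * left_turns    # left wheel distance
    dr = 2 * math.pi * wheel_radius * right_turns   # right wheel distance
    d = (dl + dr) / 2                               # forward distance
    dtheta = (dr - dl) / wheel_base                 # change in heading
    return (x + d * math.cos(theta + dtheta / 2),
            y + d * math.sin(theta + dtheta / 2),
            theta + dtheta)
```

Equal turns of both wheels move the robot straight ahead; unequal turns rotate it – exactly the information a proprioceptive wheel encoder supplies.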
Robots typically come in one of two forms: manipulators – arms, often bolted to the ground, found throughout manufacturing – or mobile robots such as drones, vacuuming floors, flying outside delivering packages, or even roaming the oceans.
For any of these jobs, robots must have ‘actuators’ – mechanisms that initiate the motion of ‘effectors,’ the parts of the robot that interact with the environment and create motion.
Actuators can be electrical, spinning up motors, or hydraulic, using pressurized oil or water to generate the mechanical movement needed for single- or multi-axis joints. They include grippers, such as the parallel jaw gripper, which uses two fingers and an actuator to let the robot grasp objects, or three-fingered grippers that provide improved flexibility and dexterity. Yet both have their physical limitations.
The more advanced ‘Shadow Dexterous Hand’ has 20 actuators and more closely resembles a human hand; it has a talent for complex manipulation, even solving that 80s favorite, the Rubik’s Cube.
Robotic problem-solving and decision-making
While sensing and moving through the environment are vital skills for robots, so too is the software that drives the agent, or autonomous AI software, toward its goals. Otherwise, it would be like us visiting a store with no clear plan for our weekly shopping. We end up with a lot of chips and very few vegetables!
Sensor measurements must be mapped onto internalized representations of the environment. They must be well-structured, enable fast and clear updates, contain sufficient information for sound decision-making, and correspond well to the environment.
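One common internal representation with those properties is an occupancy grid: a map of cells the robot believes are blocked or free, updated quickly as readings arrive. The sketch below is a deliberately simple illustration – the counting scheme and function names are assumptions, not a standard library:

```python
def update_grid(grid, hits, frees):
    """Fold new sensor readings into a coarse internal map.

    grid maps (x, y) cells to an occupancy count: positive means
    'probably an obstacle', negative means 'probably free space'.
    Each update is a single dictionary write, so it stays fast."""
    for cell in hits:
        grid[cell] = grid.get(cell, 0) + 1
    for cell in frees:
        grid[cell] = grid.get(cell, 0) - 1
    return grid

def is_blocked(grid, cell):
    """Decision-making queries the map: treat a cell as blocked only
    if the evidence for an obstacle outweighs evidence against."""
    return grid.get(cell, 0) > 0
```

Real systems use probabilistic (log-odds) updates rather than raw counts, but the shape of the idea – sensor readings in, fast map updates, simple queries out – is the same.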
At times, robots must be cooperative, and, on other occasions, they must be competitive – with other robots and even humans. After all, if a robot is too polite in a crowded situation, it may never reach its goal. It can help to formulate the problem as a game, with proxy rewards set up that ultimately tie in with the user’s needs.
Robot planning and control
As with humans, robots need to plan how they will achieve the objectives set by their operators and identified from their interaction with the environment. To do so, they typically begin with ‘task planning,’ creating a high-level plan for the action to be performed and determining the robot’s behavior.
Achieving subgoals may require finding the path from one place to another and avoiding obstacles along the way – known as ‘motion planning.’ Getting around is essential yet fraught with difficulty. For that reason, it is sometimes referred to as the ‘piano mover’s problem’ – just imagine three movers attempting to get a piano up to a third-floor apartment and the amount of communication and coordination of movement required.
Robotic tools for robotic routes
To help with route planning, robots use a variety of tools and techniques. ‘Visibility graphs’ and ‘Voronoi diagrams’ are graphical representations of the environment and can help the robot find the shortest route between A and B and avoid obstacles.
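Once the environment is reduced to a graph – nodes for obstacle corners, edges for collision-free straight segments – finding the shortest route is a classic search problem. A minimal sketch using Dijkstra’s algorithm (the graph shown is a made-up example, not a real map):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a visibility-style graph.
    graph maps each node to a list of (neighbour, edge_length) pairs."""
    queue = [(0.0, start, [start])]   # (cost so far, node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue,
                               (cost + weight, neighbour, path + [neighbour]))
    return float('inf'), []           # goal unreachable
```

On a toy graph where A connects to B (length 1) and C (length 5), and B connects to C (length 1), the planner correctly prefers the two-hop route A→B→C of total length 2 over the direct but longer edge.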
‘Probabilistic roadmaps’ combine access to a collision checker with simple planning that monitors milestone completion.
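A probabilistic roadmap builds that graph by sampling: scatter random milestones, discard any that collide with obstacles, then connect nearby survivors whose connecting segment also checks as free. The sketch below is a simplified illustration – checking only the midpoint of each edge is a crude stand-in for full segment collision checking, and all names are assumptions:

```python
import math
import random

def build_roadmap(num_samples, is_free, radius=0.3, seed=0):
    """Sketch of a probabilistic roadmap (PRM) in the unit square.
    is_free(point) is the collision checker: True if the point is
    not inside an obstacle."""
    rng = random.Random(seed)
    milestones = []
    while len(milestones) < num_samples:
        p = (rng.random(), rng.random())
        if is_free(p):                    # keep only collision-free samples
            milestones.append(p)
    edges = []
    for i, a in enumerate(milestones):
        for b in milestones[i + 1:]:
            d = math.dist(a, b)
            mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
            if d <= radius and is_free(mid):   # connect nearby free pairs
                edges.append((a, b, d))
    return milestones, edges
```

The resulting milestones and edges can then be handed to a graph search such as Dijkstra’s to extract an actual route.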
Control planning and trajectory tracking work together to achieve the planned motion using the robot’s actuators, turning the mathematical description of the path into actions in the real world and ensuring the robot is on track toward its goal.
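A simple form of trajectory tracking is a proportional controller: at every step, command a movement proportional to the error between where the robot is and where the plan says it should be. This sketch works in two dimensions with hypothetical gain and tolerance values:

```python
def track(waypoints, start, gain=0.5, steps=200, tol=0.01):
    """Follow a list of (x, y) waypoints with a proportional
    controller: the actuator command is a fraction of the tracking
    error, so the robot homes in on each waypoint in turn."""
    x, y = start
    trajectory = [(x, y)]
    i = 0
    for _ in range(steps):
        if i == len(waypoints):
            break                          # whole path completed
        tx, ty = waypoints[i]
        ex, ey = tx - x, ty - y            # tracking error
        if (ex * ex + ey * ey) ** 0.5 < tol:
            i += 1                         # waypoint reached, move on
            continue
        x += gain * ex                     # command proportional to error
        y += gain * ey
        trajectory.append((x, y))
    return trajectory
```

Real controllers add integral and derivative terms (PID) and respect actuator limits, but the core loop – measure error, command a correction, repeat – is the same.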
Moving with uncertainty
Life is uncertain. We can’t always predict the likely outcome of our actions or how the environment will change. For robots, such uncertainty is further heightened by partial observation of the environment and the hard-to-predict, or unmodeled, results of their actions.
Deterministic algorithms, ones that, given a particular input, always produce the same output, can be adapted to consider the ‘most likely state’ from the probability distribution that results from state estimation.
Ongoing, continual replanning then takes into account new inputs and beliefs that arise along the journey. Indeed, information-gathering actions can combine with ‘model predictive control’ to replan at every ‘step’ – or turn of a wheel.
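The two ideas fit together naturally: collapse the belief distribution into its most likely state, act on it, observe, and replan. A toy sketch on a one-dimensional corridor – states, rewards, and the `observe` hook are all illustrative assumptions:

```python
def most_likely_state(belief):
    """A deterministic planner can act on the single most probable
    state from the belief distribution produced by state estimation."""
    return max(belief, key=belief.get)

def replan_each_step(belief, goal, observe, max_steps=20):
    """Model-predictive-control flavour: commit to one action toward
    the goal, fold in a fresh observation, and replan from scratch.
    States are integer positions along a 1-D corridor."""
    history = []
    for _ in range(max_steps):
        state = most_likely_state(belief)
        history.append(state)
        if state == goal:
            break
        step = 1 if goal > state else -1          # move toward the goal
        belief = {s + step: p for s, p in belief.items()}
        belief = observe(belief)                  # new sensor information
    return history
```

Because the plan is recomputed every step, a surprising observation – a shifted belief – is absorbed at the next iteration rather than derailing a long fixed plan.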
Some movements, such as directing an arm, become ‘guarded,’ stopping when in contact with a surface or an object. The result is a series of such movements that ultimately guarantee the successful reaching of the target regardless of uncertainty.
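A guarded move is easy to express in code: advance in small increments and stop the instant the contact condition fires. The step size and sensor interface below are hypothetical:

```python
def guarded_move(position, direction, in_contact, step=0.01,
                 max_steps=1000):
    """Guarded motion: creep along `direction` in small steps and
    halt the moment the contact sensor fires, so the arm ends up
    touching the surface rather than pushing through it."""
    for _ in range(max_steps):
        if in_contact(position):
            return position                 # contact made: stop here
        position = (position[0] + step * direction[0],
                    position[1] + step * direction[1])
    return position                         # safety cap reached
```

Because the termination condition is contact itself rather than a predicted coordinate, the move succeeds even when the surface isn’t exactly where the model expected it.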
Humans don’t know everything at the outset – they learn from feedback. ‘Reinforcement learning’ in robotics is invaluable when the AI does not have full access to a complete model of the real world.
For robots, unlike playing games such as chess or ‘Go,’ knowledge of the real world is vital to ensure that the robot’s actions are safe. AI researchers achieve this by both transferring policies that work in simulations into the real world and through ‘safe exploration.’ Learning slowly, without damaging themselves or the environment, is better than a crash-and-burn approach.
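At its smallest, reinforcement learning looks like the tabular Q-learning sketch below: the agent learns purely from reward feedback which action each state favors, with an epsilon-greedy policy standing in (very loosely) for cautious exploration. The corridor world, reward values, and hyperparameters are all made-up illustrations:

```python
import random

def q_learn(n_states, goal, episodes=500, alpha=0.5, gamma=0.9,
            epsilon=0.2, seed=0):
    """Tabular Q-learning on a 1-D corridor: actions are -1 (left)
    and +1 (right); reaching `goal` pays 1.0, every other step a
    small penalty, so the agent learns to head for the goal."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(n_states) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != goal:
            # explore occasionally, otherwise exploit current estimates
            if rng.random() < epsilon:
                a = rng.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)   # walls at both ends
            r = 1.0 if s2 == goal else -0.01        # reward feedback
            best_next = max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q
```

After training, the learned values prefer moving right from every state – the policy was never programmed in, only discovered through feedback. Safe exploration in real robotics goes much further, constraining which actions may even be tried.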
Crucially, the software must be able to recognize when it is succeeding and when it is failing and avoid causing harm directly or indirectly. After all, a factory robot that successfully paints the shell of a car may achieve its goal while inadvertently spraying the operator during the process.
Some experts believe the successful union between AI and robotics could mark a major step toward machines achieving something like human-level general intelligence.
But why does it matter if the AI is disembodied, housed in a computer separate from its environment, rather than sitting in a physical entity able to move through, and interact with, its environment? The answer lies in something at which we excel: trial and error.
“Embodiment is a critical part of how humans and many animals learn, because it allows you to build and test hypotheses,” says Chelsea Finn, AI researcher at Stanford University in California. Finn believes that AI must learn to solve general-purpose tasks with minimal supervision to achieve the sort of ‘common sense’ knowledge of the environment we take for granted.
And yet, for others, embodiment is not the complete answer. Evolutionary psychologist David Geary at the University of Missouri believes an evolutionary, survival-of-the-fittest process is needed to weed out those robots that are less successful at conceptual abilities. Machines that fail at specific tasks or tests could be made obsolete, while those that succeed are developed further to improve on past achievements.
The impact of robots
Robotics is having a significant impact in a wide range of settings: home care, caring for older adults; healthcare, performing remote operations; services, delivering towels to hotel rooms; and exploration, particularly in harsh environments such as Mars, retrieving satellites in near-Earth orbit, or mapping sunken ships at the bottom of the sea.
Learning to perform complex manipulation tasks may be at the heart of future successes in using robotics in the real world. In 2017, a group of researchers attempted to build a robot that could solve the Rubik’s Cube in the physical world using a single hand. Such dexterity, even for humans, is extreme, so researchers began by training the AI to solve the problem in simulation using a neural net before transferring the skills to an actual robot hand. Through training, it became so robust that it completed the task even when researchers attempted to interfere by pushing against the robot’s limb with a toy giraffe.
Such dexterous abilities support life-saving in robot-assisted surgery. ‘Da Vinci’ surgical robots have now been cleared to perform multiple surgical procedures. They can be operated remotely, from anywhere on the planet – so long as there’s a good internet connection!