Artificial intelligence

How writers explore robots and intelligent machines.

AI in science fiction

Science fiction writers have been exploring the topic of artificial intelligence for well over a century, delving into the potential consequences of creating sentient machines.


The topic has often been used as a metaphor for scientific advancement in general. A robot that turns on its creators might symbolize the risks of building technologies we do not fully understand.

Artificial intelligence can also be used to explore what it means to be human. Philip K. Dick’s *Do Androids Dream of Electric Sheep?* imagines a future world where robots are hard to tell apart from humans. If an artificial being looks like a human, and thinks like a human, does that not make it a human?

The first robots


Samuel Butler’s *Erewhon* introduced the concept of artificial intelligence to science fiction literature. Published in 1872, the book describes machines evolving intelligence through a process of natural selection.

This idea grew out of two events of Butler’s lifetime: the Industrial Revolution, in which machines began replacing human workers, and the publication of Darwin’s *On the Origin of Species*.

Nearly fifty years later, in 1920, Karel Čapek wrote *R.U.R.* (*Rossum’s Universal Robots*). This play coined the term “robot”, derived from the Czech *robota*, meaning ‘forced labor’. The story revolves around robots created to serve humanity, who eventually rise up against their creators.

These stories reflect the fears of human redundancy which were common around that time. If machines could replace us in certain jobs, could they one day replace us completely?

Laws of Robotics

After the groundwork was set by writers like Butler and Čapek, the topic of artificial intelligence was taken up by Isaac Asimov – one of the most influential writers in all of science fiction – in the 1940s and 50s.

In his *I, Robot* stories, he established the Three Laws of Robotics. These laws were programmed into every robot to ensure robots would be a positive force in society.


The First Law states that a robot must never harm a human, or allow a human to come to harm through inaction. The Second Law states that a robot must always follow orders from humans, unless those orders conflict with the First Law. The Third Law states that a robot must preserve its own existence, unless that preservation conflicts with the First or Second Law.
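The defining feature of the Three Laws is their strict precedence: each law applies only when no higher law overrides it. As a purely illustrative sketch (not anything from Asimov’s stories – the function and parameter names here are invented), that hierarchy resembles an ordered rule check:

```python
def may_act(harms_human: bool, ordered_by_human: bool, endangers_self: bool) -> bool:
    """Toy reading of the Three Laws as an ordered rule check."""
    if harms_human:
        return False   # First Law: never harm a human, highest priority
    if ordered_by_human:
        return True    # Second Law: obey, unless obeying breaks the First Law
    if endangers_self:
        return False   # Third Law: self-preservation, lowest priority
    return True        # nothing forbids the action

# An order that would harm a human is refused, despite the Second Law:
print(may_act(harms_human=True, ordered_by_human=True, endangers_self=False))
```

Note how a human order that endangers the robot itself is still obeyed: the Second Law outranks the Third, which is exactly the kind of conflict many of Asimov’s plots hinge on.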

Frankenstein Complex

As well as his famous Laws of Robotics, Asimov also wrote about the Frankenstein Complex: the idea that humans are intrinsically scared of robots.

Some people might fear that robots will replace them, making human beings redundant. Others might fear that robots will attack them, just like the monster in *Frankenstein*.


This idea is common in science fiction, as in *The Matrix* (1999), in which humanity is enslaved after our own creations turn against us. An earlier example is Jack Williamson’s story *With Folded Hands*, first published in 1947.

Williamson used robots as a metaphor for nuclear power. He said: “Technological creations we had developed with the best intentions might have disastrous consequences in the long run.”

Malevolent robots

In 1967, Harlan Ellison wrote *I Have No Mouth, and I Must Scream*. When it comes to stories about artificial intelligence, this one presents a terrifying, worst-case scenario.


In the story, a superintelligent artificial mind is tortured by its own existence. It blames its creators, and seeks revenge, using its vast knowledge to inflict psychological torment upon five human captives.

The story is a horrifying vision of the future, and warns us of the dangers of creating a being more powerful and intelligent than we are. It also asks an ethical question: if an artificial intelligence cannot consent to its own creation, is it wrong to force these beings into existence?

Benevolent robots

Science fiction has no shortage of malevolent AI, but there are plenty of stories about benevolent robots too. These stories remind audiences that scientific advancement does not always need to be feared.


In *Star Wars*, C-3PO and R2-D2 are loyal and resourceful, and become integral members of the Rebel Alliance. In *Star Trek*, Data is a human-like android, and well-liked by other characters, despite lacking certain social skills.

These robots see the world differently from humans, and struggle to understand our irrational, emotion-driven lives. Their observations on human idiosyncrasies can make an audience more aware of its own oddities and quirks.

Benevolent robots have also been used as a metaphor for deeper themes. In the *Star Trek* episode “The Measure of a Man”, a court rules that Data is not anyone’s property, and is deserving of human rights.

Human-like robots


Some science fiction writers have blurred the line between robots and humans, and explored the ways we could tell the two groups apart.

In *Blade Runner* – the film based on Philip K. Dick’s *Do Androids Dream of Electric Sheep?* – the Voight-Kampff test is used to distinguish robots from humans. This polygraph-like test measures emotional responses, as robots in the story have difficulty emulating genuine human emotions.

In the film, some robots do not know they are robots until the moment they fail the test. This unsettling idea invites members of the audience to question their own humanity. If their own mind was artificial, how would they even know it?

AI in real life


Science fiction is rarely meant to be predictive. Its stories are metaphors, not forecasts. But occasionally, writers get things right, whether they were trying to predict them or not.

In the last few years, there has been a boom in artificial intelligence research, with companies releasing voice assistants, chatbots, and even humanoid robots.

Google DeepMind’s AlphaGo caused a major stir in 2017, when it defeated the world’s top-ranked Go player, Ke Jie. In 2022, OpenAI’s ChatGPT rose to prominence with its advanced language generation.

These are both examples of ‘weak’ AI: an artificial intelligence that learns to perform a specialized task, but is incapable of general thought. We have a long way to go before creating a truly conscious AI, like the ones in science fiction literature.

What happens next?


The future development of ‘strong’ AI is a contentious topic. Some experts argue that strong AI is unachievable; a robot will never have a conscious, human-like mind.

Even if it proves to be possible, many scientists have warned against it. In 2014, renowned physicist Stephen Hawking said: “Creating AI would be the biggest event in human history. Unfortunately, it might also be the last.”

Other scientists believe that strong AI would massively improve our lives. These intelligent beings could develop solutions to cancer and climate change, and other problems which humans have struggled to solve.

Ultimately, there is no way to know how strong AI would affect the world. Science fiction has explored some possible futures, but only time will tell what really comes to pass.
