AI Ethics
Plagiarism
In this pathway so far, we've looked at lots of different ways that Artificial Intelligence is changing the modern world. It's never been easier to find patterns in data, to generate content, to automate processes, and so on.
But here's the thing: a lot of people are actually unhappy with these changes. They've raised questions about the ethics of AI: should we really be using these technologies? Maybe the rise of modern AI will have a negative impact on the world.
Over the course of this tile, we'll be taking a look at these ethical questions in more detail. It's important to stop and do this sometimes. To step back from the hype, and think about AI from a slightly different angle.
The first ethical concern we'll be taking a look at is ownership.
We already touched on this subject earlier, when we learned about Generative AI. Essentially, whenever an AI company trains a model, it requires a lot of data. And sometimes, this data (e.g. artwork, text) is scraped from the internet without asking for permission from the creators.
In 2023, Getty Images launched a lawsuit against Stability AI. Getty claimed that Stability had used 12 million of its licensed images to train a Text-to-Image model, all without Getty's consent.
Similar claims have been made about Large Language Models. The New York Times launched a lawsuit of its own against OpenAI, also in 2023, alleging that copyrighted news articles had been used to train ChatGPT.
Is the use of copyrighted content in training data actually against the law? AI is still such a new technology that it's all a bit of a gray area. The court cases we just mentioned will be pretty important in setting a precedent for the future.
Putting aside the legal question, though, some people argue that this practice just isn't very ethical. If someone produces a painting, or a novel, or any other piece of creative content, shouldn't they get to decide how it's used?
That's what a lot of creators are saying. In 2023, an artist by the name of Eva Toorenent said this to the BBC: "If I'm the owner, I should decide what happens to my art."
An AI company might make the argument: human creators are constantly taking inspiration from one another. An artist might go to a local gallery, encounter a painting... and if they like it enough, they might shift the way they produce their own works of art.
Is it different, then, to show a piece of art to a neural network, and let it be inspired too? If it is different, then why is it different? Again, it's a bit of a gray area, and you can make arguments for either side.
At the end of the day, this is all uncharted territory. The AI industry is moving so fast that it's forcing us to think about ownership, inspiration, copyright and consent in ways that never would have crossed our minds in the past.
Misinformation
Artificial Intelligence has a problem. Over the last few years, it's been repeatedly involved in the spread of misinformation.
Here's the thing. AI systems are only as good as the data they're originally trained on. If that data contains mistakes or falsehoods, the AI can end up replicating them. Computer scientists call this GIGO: 'Garbage In, Garbage Out'.
For example, if an AI was trained on a dataset of random online articles, and a few of those articles happened to claim that the Earth is flat... there's a chance the AI will regurgitate that view.
Bias is also a problem. Imagine a model which helps companies to hire new staff. If it's trained on data from previous hires, and those hires favored a certain race or gender, the AI is likely to have those biases too.
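To make that concrete, here's a minimal sketch of the idea in Python. The data is completely made up, and a real hiring system would be far more complex – but it shows how a simple scikit-learn model, trained on 'historical' decisions that favored one group, will score two otherwise identical candidates differently:

```python
# A toy illustration of bias replication: the model learns from made-up
# "historical" hiring decisions that favored group 0 over group 1.
from sklearn.linear_model import LogisticRegression

# Each row is [years_of_experience, group]; each label is hired (1) or not (0).
X = [[5, 0], [6, 0], [4, 0], [7, 0],   # group 0: mostly hired
     [5, 1], [6, 1], [4, 1], [7, 1]]   # group 1: mostly rejected
y = [1, 1, 1, 1,
     0, 0, 0, 1]

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only in group:
probabilities = model.predict_proba([[6, 0], [6, 1]])[:, 1]
print(probabilities)  # the group-0 candidate gets a much higher "hire" score
```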
Remember: when an AI produces misinformation, or biased outputs, it has no idea that it's doing it. It has no sense of 'truth' or 'prejudice' – it's just replicating patterns in the data.
When an AI confidently produces false information like this, experts call it hallucination. It can happen in all kinds of contexts – from misdiagnosing medical conditions, to citing sources that never existed, to producing images like the one you can see below.
In 2022, a chatbot being used on Air Canada's website started promising discounts to passengers. These discounts weren't actually real. The chatbot was just hallucinating.
But in 2024, a court decided that Air Canada had to honor these discounts. It's put some companies on red alert – it's a risky game to use an AI model if you're held accountable for its mistakes.
A major problem with hallucination is that it's hard to know where the mistakes and biases even came from.
When a neural network 'makes a decision', the calculation flows through millions or even billions of parameters – and none of them are labeled. There's no easy way for a human to check where the model veered away from a 'truthful' output and started spouting misinformation instead.
That's why a lot of AI experts are now pushing for explainability. This means building models whose decision-making processes are more visible and easier to understand.
That's easier said than done. It would require a rethink of our current approach to neural networks. But if it could be done, it would make it much easier to track down (and fix) any problems with hallucination.
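To give a flavor of what explainability means, here's one very simple (and entirely hypothetical) approach: nudge each input to a model and watch how much the output moves. The tiny 'model' below is just a stand-in formula, not a real neural network – which is exactly why this kind of probing is so much harder at real-world scale:

```python
# A toy sketch of perturbation-based explanation: change one input at a
# time and measure how much the model's output shifts. (The "model" here
# is a made-up scoring formula, standing in for a trained network.)

def toy_model(experience: float, group: int) -> float:
    """Returns a hypothetical 'hire' score between 0 and 1."""
    return min(1.0, max(0.0, 0.1 * experience - 0.4 * group + 0.2))

candidate = {"experience": 6.0, "group": 1}
baseline = toy_model(**candidate)

for feature, nudged in [("experience", {"experience": 7.0, "group": 1}),
                        ("group",      {"experience": 6.0, "group": 0})]:
    shift = toy_model(**nudged) - baseline
    print(f"Changing {feature} moves the score by {shift:+.2f}")

# The output suggests 'group' drives this decision far more than
# 'experience' -- the kind of insight explainability tries to surface.
```

Real explainability research works on similar principles, but with far more sophisticated techniques, because a modern neural network has billions of parameters rather than one tidy formula.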
With hallucination, AI models can spread misinformation by accident. But they're also a very powerful tool for people who want to spread misinformation on purpose.
You might have heard of deepfakes. These are AI-generated videos and images that make it look like someone is doing something that they never actually did. They’re called deepfakes because they use 'deep' learning models to 'fake' this content in a very realistic way.
For example, on the eve of an important election, a deepfake might show a famous politician kicking an innocent puppy. If this deepfake went viral, it could potentially be seen by millions of people, and change who those people vote for.
One thing is certain: in the age of AI, we can no longer trust our own eyes.
Weaponization
AI is a tool, and in the hands of the wrong people, that tool can be extremely dangerous. Last time, we talked about deepfakes. But another example is the recent rise of lethal autonomous weapons (LAWs).
LAWs are advanced military systems that can be trained to track down human targets, and potentially even kill them. For example, the US military's Replicator Initiative is a cutting-edge plan to build intelligent swarms of unmanned, weaponized drones.
As things stand, most LAWs still rely on a human to make the final decision to attack. But that could change in the near future. It's why a lot of people refer to LAWs by a different name: 'killer robots'.
LAWs bring to mind the I, Robot stories, written by Isaac Asimov way back in the 40s and 50s. We've talked about Asimov already in this pathway. He's the guy who came up with the Frankenstein Complex – that's the idea that humans are instinctively scared of robots.
Asimov's stories featured fictional robots, which were always programmed with a set of pre-defined rules. He called them the Laws of Robotics. They helped to make sure that these fictional robots would always be a positive force in society.
The First Law of Robotics stated that a robot must never harm a human being – or allow one to come to harm through inaction. Robots could never break this rule, no matter the context. It was fundamental to the way they behaved.
In the days of Asimov, the idea of robots harming humans was purely hypothetical. But now it's an essential ethical question that urgently needs to be resolved.
If you asked most scientists, they wouldn't want AI to be used for weapons. In 2015, several thousand AI researchers signed an open letter calling for a ban on lethal autonomous weapons. But as things stand, these weapons are still being developed.
That letter calls it a third revolution in warfare. First, we had the invention of gunpowder. Then we had the invention of nuclear weapons. Now, we have the invention of AI weapons – and warfare might never be the same.
Energy demands
There's something we haven't really talked about yet. The world's most powerful AI models demand a lot of computing power.
Because of this, these models can't be hosted on a single computer. Instead, they need to be hosted by something called a data center.
A data center is like a giant warehouse full of powerful computers and servers. Working together, these computers can process vast amounts of data. It's the only real way to run a model (and train a model) as powerful and complex as something like ChatGPT.
But here's the thing. These data centers use vast amounts of energy. And that raises another ethical question: how bad is AI for the environment?
As things stand, roughly 2% of the world's electricity is used to power data centers. But by 2030, that figure could rise as high as 4%.
This isn't just down to AI. Data centers are also used to host websites, apps, and so on. But the rapid growth of the AI industry is definitely the driving force.
According to a recent estimate, the total energy demand of AI models is doubling every 100 days. By 2030, the AI industry is likely to be using more energy per year than countries like Iceland and the Netherlands.
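It's worth pausing on what 'doubling every 100 days' actually means. A quick back-of-the-envelope calculation (assuming the trend simply holds steady, which is far from guaranteed) looks like this:

```python
# Rough arithmetic: if energy demand doubles every 100 days, how much
# does it grow over a full year? (Assumes the trend just continues.)
doublings_per_year = 365 / 100           # about 3.65 doublings
annual_growth = 2 ** doublings_per_year  # about 12.6x

print(f"Roughly {annual_growth:.1f}x more energy every year")
```

In other words, that trend implies AI's energy use growing by more than an order of magnitude every year – which is why comparisons to entire countries aren't as far-fetched as they sound.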
In 2024, Shaolei Ren – a professor of computing in California, USA – actually worked out how much energy it took to run a single ChatGPT query.
Supposedly, if you ask ChatGPT to generate a 100-word email, its data center uses about as much energy as it takes to keep a lightbulb lit for fourteen hours.
It's worth pointing out that data centers also use other resources. Water, for example, is used to keep all these powerful computers cool.
According to Shaolei Ren, that same 100-word email would also use the equivalent of one bottle of water.
To sum things up: Artificial Intelligence is extremely resource intensive. And that might not be great for the future health of the planet.
Of course, AI models can also be good for the planet. Imagine, for example, a model that studies millions of houses, then finds patterns that help us to develop a new type of energy efficient home. Or a model which studies weather patterns, then helps us collect water more effectively.
But it's hard to know whether these benefits outweigh the resource demands of these models. It's yet another question that needs answering: is Artificial Intelligence bad for the planet or not?
Citizenship & rights
Here's another ethical question: should AI models have rights?
Earlier, we talked about Sophia – a humanoid robot developed by Hanson Robotics. In 2017, this robot was actually granted citizenship in Saudi Arabia. In theory, this gave her the right to marry and vote.
But this was more of a publicity stunt than anything. As we mentioned before, Sophia is just an Artificial Narrow Intelligence with a face. Granting her citizenship and human rights is essentially no different from granting citizenship to your laptop.
So, today's generation of narrow models don't really need any rights. At least, they don't need them any more than a laptop would need them, or a toaster, or a mobile phone.
Things would get a lot more complicated, though, if we started to develop Artificial General Intelligence (AGI) models, especially if these models showed signs of human-like consciousness and emotions.
Here's a question. As things stand, AI is a tool. But with a human-like mind, does it suddenly become a slave? The word "robot" is actually derived from the Czech word "robota", which roughly translates as 'forced labor'.
As we said at the start of this entire pathway, there's no guarantee that an AGI model will ever be invented. But if it was invented, it would open up so many ethical questions that it would be hard to even keep track.
We've already talked about rights and citizenship. But what about marriage and love? Should people be allowed to have relationships with robots? Should robots have relationships with each other?
There's also the legal side of things. If you shut down an AGI, is it murder? And what about religious questions – does an AGI have a soul?
Again: none of this is relevant right now. But it's an interesting topic to think about. Narrow AI has already raised plenty of urgent ethical dilemmas, but these are nothing compared to the existential questions that would come with AGI.