The Future
The next few years
At last, we’re reaching the end of this pathway. But it’s important to remember: the story of AI is still only just beginning.
It’s hard to predict what’s coming next, but over the next few years, there are bound to be lots of developments. Better neural networks, more impressive robots… and with every passing year, more and more people will start using AI as part of their daily lives.
Just think: a few decades ago, barely anyone owned a mobile phone, and even fewer people used the internet. Now, we use these technologies so much that it's hard to believe we ever lived without them.
In 2024, Sam Altman (current CEO of OpenAI) said that the next big step for AI models would be their implementation as multi-purpose personal assistants.
Imagine an AI on your phone, which Altman described as a "super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had."
Think of all the questions you could ask it: "Hey, what was the name of the guy I met at that conference last year?" It could do other things too, like field emails, clean up photos, give advice, and so on.
Something like this raises more ethical questions (how would you feel about an AI having access to all your personal information?), but it's hard to deny that it would be an extremely useful tool.
In the near future, we're also likely to see more powerful and complex AI models developed for industries like healthcare, law, and education.
At the moment, doctors spend a huge amount of time on note-taking and administration. If AI models could take over these tasks, it would free doctors up to spend more time with patients. According to a report by MedTech Europe, the time saved would be equivalent to hiring 500,000 additional doctors across Europe.
Meanwhile, the education industry could benefit from a lot more personalization and custom learning for students. AI could keep track of the needs and interests of individual students, and help teachers to provide the lessons that suit them best.
It could also have a major impact on accessibility. Imagine, for example, an AI model that could read the whiteboard aloud to students with vision impairments.
Over the next few years, we're also likely to see some major leaps forward in robotics.
In particular, we could see the first humanoid robots with genuine real-world uses. Tesla, for example, is currently developing a robot called Optimus, which they describe as a "general purpose, bi-pedal, autonomous humanoid robot capable of performing unsafe, repetitive or boring tasks."
Robots could also lead to a bit of a revolution in machine learning. According to Yann LeCun (Chief AI Scientist at Meta), there's no bigger dataset than the actual world – and robotics can allow a neural network to experience this data first hand.
To put it into perspective: the most powerful neural networks in the world right now have been trained on the entire internet. But by the time a real child is four years old, they've experienced roughly 15 times more data. If a robot could have the same experience, its learning would be off the charts.
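The comparison above can be put into rough numbers. The figures in this sketch are illustrative assumptions (the corpus size is a ballpark guess, and the 15x multiplier is simply the estimate quoted above), not measurements:

```python
# Back-of-envelope comparison of training-data scales.
# All figures are illustrative assumptions, not measurements:
# the 15x multiplier comes from the estimate in the text above.
llm_training_bytes = 2e13  # assumed size of an internet-scale text corpus, in bytes
child_experience_bytes = 15 * llm_training_bytes  # a four-year-old's sensory experience

print(f"LLM corpus:    ~{llm_training_bytes:.0e} bytes")
print(f"Four-year-old: ~{child_experience_bytes:.0e} bytes")
```

The point isn't the exact numbers, which nobody can measure precisely; it's that everyday sensory experience dwarfs even an internet-scale text corpus.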
A word of caution: there's a lot of hype around AI right now. But there was also a lot of hype about it in the 1960s... then the technology hit a wall, and we plunged into the AI winter.
Experts aren't really expecting that to happen again. But there could be some hurdles in the next few years, especially surrounding those ethical questions we've talked about. For example, if governments crack down on data use or energy consumption, it could slow the industry down.
There's also public opinion to think about. As we said, there's plenty of hype around AI... but would that change if we felt like neural networks were invading our privacy, or humanoid robots were taking our jobs?
Either way, we have an interesting few years ahead of us.
Singularity
While we're talking about the future of AI... there's one more thing we should mention. Experts call it the singularity – and some of them think it's the greatest threat to life on Earth as we know it.
Specifically, the singularity refers to the moment when an AI model's intelligence surpasses that of humans. Think of human intelligence and Artificial Intelligence as two lines on a graph: the singularity is the point where the two lines cross.
As we've mentioned repeatedly in this pathway, that moment is surely still a long way off, and there's no guarantee that it will ever happen at all. But let's assume, for a moment, that it will. Why would that be a threat to life on Earth?
Here's the thing. If humans created an AI model that was slightly more intelligent than us... then that AI would surely be smart enough to create an AI of its own.
That second AI would be even smarter. After all, it was made by a slightly better creator. And in turn, that second, smarter AI could create an even smarter model. This process would repeat again and again, with each AI being smarter than the last.
Scientists call this recursive self-improvement. In a matter of years, we could end up with Artificial Super Intelligences which are thousands of times smarter than humans.
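The compounding described above can be sketched as a toy loop. Every number here is a hypothetical assumption chosen only to show the effect, not a prediction:

```python
# Toy model of recursive self-improvement. All values are
# hypothetical assumptions chosen to illustrate compounding.
intelligence = 1.1   # generation zero: just above human level (human = 1.0)
growth_factor = 2.0  # assumed per-generation improvement
generations = 0

# Count how many "build a smarter successor" steps it takes to reach
# an intelligence thousands of times greater than a human's.
while intelligence < 1000:
    intelligence *= growth_factor
    generations += 1

print(f"{generations} generations to exceed 1000x human intelligence")
# → 10 generations to exceed 1000x human intelligence
```

Even with these modest made-up numbers, the curve explodes after just a handful of iterations, which is the intuition behind the warnings below.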
Now, we have to assume that these self-improving Artificial Super Intelligences would require a lot of resources. After all, they would need something to power their systems.
And that's why all this could turn out to be a threat to life on Earth. If these models saw humanity as competition, there's a chance they would wipe us out. Why would they bother keeping us around?
In 2014, Professor Stephen Hawking said this: "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." Elon Musk, meanwhile, has called this phenomenon "our biggest existential threat."
Just to repeat this one more time: there's no guarantee that we'll ever reach the singularity. Even if we do, the idea of recursive self-improvement eventually leading to human extinction is totally hypothetical.
But it's another important ethical question that we need to bear in mind: if we start to get close to the singularity, is it a good idea to continue?
For now, there are so many reasons to be excited about the rise of AI. That's what you should try to focus on as you come to the end of this pathway. But at the same time, it's important to hold on to a little caution.
It's the same with any new technology. While enjoying the benefits, we also need to prepare ourselves for challenges along the way.