Learning the ‘Knowledge’
Our human ability to mentally represent what we know of the world enables us to interact with our environment and shape our future.
How do London taxi drivers learn all the routes in London?
Take research into London taxi drivers. Since 1865, to obtain a license it has been mandatory to acquire detailed knowledge of the 25,000 streets within 6 miles of Charing Cross – known as the ‘Knowledge.’
Unsurprisingly, it can take up to 3 years to cram that amount of information into someone’s brain. Once done, would-be cabbies are then questioned on the shortest, most scenic, and fastest routes between multiple points within the city of London.
Research shows that London taxi drivers develop a greater volume of grey matter in the hippocampus – denser neural networks that enable them to store, access, and utilize their ‘Knowledge.’
How can AI mimic this reasoning?
For an AI system to mimic our human potential for reasoning, it has first to identify what content must be captured in its knowledge base and then how to represent facts about the world in such a way as to support advanced reasoning.
Our knowledge is processed in natural language, while AI uses a variety of representation and programming languages to learn knowledge. The AI must also develop its meta-knowledge – which is knowledge about knowledge.
What is a knowledge base in AI?
A knowledge base in AI is where an AI system’s facts and rules are stored and catalogued. Using keywords and filters, the AI can quickly and easily find the information it needs.
Representing the world through ontologies
What is ontological engineering?
Real-world environments require more general and flexible representations than artificial ones, such as game simulations.
‘Ontological engineering’ is the name given to the study of representing, naming, and defining such categories of information.
How is ontological engineering organised?
It’s not straightforward. AI must capture events, times, physical objects, and beliefs to support reasoning, and it must master both procedural knowledge (how to perform tasks) and declarative knowledge (facts about the world).
Such knowledge can be held within frameworks known as ‘upper ontologies,’ with more general concepts such as abstract objects and generalized events found at the top.
More specific concepts are found at the bottom, including measurements, like times and weights, and things, like animals and vegetables.
What are general-purpose ontologies?
General-purpose ontologies are often created to be valid and applicable in multiple special-purpose domains.
They make it possible to combine or unify reasoning and problem-solving in multiple areas simultaneously.
Gaps are left because, no matter how much information is available, our knowledge of the world is never fully complete.
How we organize and categorize the knowledge we hold about the world is vital to how it is used.
Indeed, tests with humans have shown that, when primed with the word ‘dog,’ we more quickly identify the word ‘cat.’ It seems our knowledge is organized and accessed in such a way to facilitate reasoning.
How does AI organise and classify knowledge?
AI attempts to manage its knowledge representation and reasoning by organizing objects into categories before breaking them down into subcategories (or subclasses), with objects inheriting properties – a structure known as a taxonomy. This helps a knowledge base in artificial intelligence grow in an orderly way.
For example, ‘terriers’ may form a subclass of ‘dogs.’ If dogs have the properties of ‘barking’ and ‘chasing postal workers’, we might infer that terriers do too.
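The terrier example above can be sketched in a few lines of code. This is a minimal illustration, not a real ontology: the category names, the parent links, and the extra ‘digs’ property are all invented for the example.

```python
# A toy taxonomy: each category has an optional parent and its own properties.
taxonomy = {
    "dogs": {"parent": None, "properties": {"barks", "chases postal workers"}},
    "terriers": {"parent": "dogs", "properties": {"digs"}},
}

def inherited_properties(category, taxonomy):
    """Collect a category's own properties plus everything inherited from ancestors."""
    props = set()
    while category is not None:
        node = taxonomy[category]
        props |= node["properties"]
        category = node["parent"]
    return props

print(sorted(inherited_properties("terriers", taxonomy)))
# ['barks', 'chases postal workers', 'digs']
```

Because ‘terriers’ points to ‘dogs’ as its parent, the inference that terriers bark and chase postal workers falls out of the structure – nothing has to be restated at the subclass level.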
We know taxonomies work because they’ve been used for centuries in fields from natural history to library science.
Just think of Victorian scientists pinning unfortunate beetle specimens into glass cases or massive libraries such as the British Library or the Library of Congress stacked with millions of labeled books accessible through the Dewey Decimal system.
You can measure everything – almost!
Look at what is around you, choosing a random selection of objects. Each one has measures, values we assign, such as a cost, height, and weight, which can be measured in units such as dollars, inches or centimeters, and kilograms or pounds.
Note that not everything can be quantified so easily. How much does Instagram weigh? And how tall is the Oasis hit ‘Wonderwall’?
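Representing a measure as a value paired with a unit is one simple way to capture this in code. Here is a minimal sketch; the conversion table and its two entries are assumptions for illustration, not a complete units library.

```python
# A measure is a plain (value, unit) pair; conversions look up a rate.
CONVERSIONS = {
    ("inches", "centimeters"): 2.54,
    ("pounds", "kilograms"): 0.453592,
}

def convert(value, from_unit, to_unit):
    """Convert a measured value between two known units."""
    if from_unit == to_unit:
        return value
    return value * CONVERSIONS[(from_unit, to_unit)]

print(convert(10, "inches", "centimeters"))  # 25.4
```

Keeping the unit attached to the value means the knowledge base can answer questions in whichever unit the questioner prefers.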
How can you calculate qualitative data?
Yet, many measures are quantitative and, therefore, easy to capture and represent. But what of qualitative data? How do I score the deliciousness of strawberries or the beauty of the latest galaxy picture taken by the James Webb Space Telescope?
Such ‘natural’ categories are difficult to measure and represent because they lack specific physical properties or clear-cut definitions.
The problem can be reduced to some degree by recognizing that not all measures are absolutes; instead, they provide a means to order items.
The strawberry is more delicious than the blueberry, and I don’t like olives, so they are, in relative terms, last – for me, at least.
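That relative ordering is easy to represent without assigning any absolute score. The sketch below encodes one person’s ranking as an ordered list; the foods and their order simply mirror the example above.

```python
# Qualitative measures as an ordering rather than absolute numbers.
deliciousness = ["strawberry", "blueberry", "olive"]  # best -> worst, one person's view

def prefer(a, b, ranking):
    """True if item a is ranked above item b."""
    return ranking.index(a) < ranking.index(b)

print(prefer("strawberry", "blueberry", deliciousness))  # True
print(prefer("olive", "blueberry", deliciousness))       # False
```

No number ever says *how much* tastier the strawberry is – only that it outranks the blueberry, which is all the ordering claims.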
The complexity of events
In the real world, events are rich and complex and far from discrete. For example, when I overslept yesterday and missed the bus, it could have been that I was simply tired and didn’t hear the alarm, or there had been a power cut in the night, resetting the time.
Either way, failure to be woken up meant I was late for work. Events were connected.
What is ‘event calculus’?
‘Event calculus’ attempts to capture whether something has happened or is happening, its start and end points in time, its initial state and end state, and what else is happening simultaneously. This is one way of deepening an artificial intelligence’s knowledge base.
The approach allows objects to be thought of as generalized events, a chunk of space-time that persists and changes over time.
For example, the USA can be considered an event that began in 1776 with 13 states and today has 50 – its population and presidency changing over time.
Therefore, President(USA) is a single object that has been made up of different people at different times.
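A toy version of this idea can be coded directly: events initiate or terminate ‘fluents’ (properties that hold over time), and a query asks whether a fluent holds at a given moment. The event names and timestamps below are invented for illustration – real event-calculus systems use logical axioms rather than a list scan.

```python
# Each entry: (time, event name, effect, fluent it affects).
events = [
    (1776, "declare_independence", "initiates", "USA_exists"),
    (0, "alarm_set", "initiates", "alarm_on"),
    (2, "power_cut", "terminates", "alarm_on"),
]

def holds_at(fluent, t, events):
    """Replay events in time order; the latest effect at or before t wins."""
    state = False
    for time, _event, effect, f in sorted(events):
        if f == fluent and time <= t:
            state = (effect == "initiates")
    return state

print(holds_at("alarm_on", 1, events))  # True  (set, not yet cut)
print(holds_at("alarm_on", 3, events))  # False (power cut at time 2)
```

The overslept-commuter story maps onto this neatly: the power cut terminates the ‘alarm_on’ fluent, so at wake-up time the alarm no longer holds.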
Knowledge about beliefs
The ability to store existing beliefs and create new ones is vital to AI. Yet, to infer beyond what it already ‘knows,’ an AI needs knowledge about its own beliefs – and the capacity to reason about them.
After all, if you spend hours cooking your partner what appears to be a splendid meal and they pull an uncomfortable face, you might assume your efforts were wasted.
How can AI understand beliefs?
Therefore, AI needs to be able to model the mental objects within its knowledge base to answer questions about them. ‘Propositional attitudes’ are typically used to capture attitudes such as ‘believes,’ ‘knows,’ ‘wants,’ and ‘informs.’
For example, 4-year-old Sam knows Santa exists: Knows(Sam, Exists(Santa)).
‘Modal logic’ then allows whole sentences to be taken as arguments, and even enables reasoning about possible worlds – rather than assuming a single true world. While no possible world makes 1 + 1 = 3 true for an AI agent, or program, it is possible to imagine a world where the Easter Bunny delivers the presents at Christmas.
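The standard possible-worlds reading of ‘knows’ can be sketched in a few lines: an agent knows a proposition if it holds in every world the agent cannot rule out. The worlds, propositions, and accessibility list below are all invented for illustration.

```python
# Each world is the set of propositions true in it.
worlds = {
    "w1": {"santa_exists"},
    "w2": {"santa_exists", "easter_bunny_delivers_presents"},
}

# Worlds each agent considers possible (cannot rule out).
accessible = {"Sam": ["w1", "w2"]}

def knows(agent, proposition):
    """An agent knows p iff p is true in every world accessible to them."""
    return all(proposition in worlds[w] for w in accessible[agent])

print(knows("Sam", "santa_exists"))               # True  (true in w1 and w2)
print(knows("Sam", "easter_bunny_delivers_presents"))  # False (false in w1)
```

Sam knows Santa exists because it holds in both worlds he considers possible; the Easter Bunny claim holds in only one, so it stays a mere possibility rather than knowledge.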
Semantic networks and visualizing a knowledge base
The idea of ‘semantic networks’ has been around for a long time, providing a graphical aid for visualizing knowledge and a powerful tool for inferring object properties based on category membership.
How do semantic networks work?
While there are many variants, a semantic network can represent individual objects, their relationships with other objects, and the properties they inherit from the categories they belong to.
For example, Jay may be a member of the category ‘pilots’ that is a subset of ‘staff’ that is a subset of ‘GoFly Airlines.’
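The GoFly example can be sketched as a tiny network of ‘member of’ and ‘subset of’ links. The facts mirror the sentence above; the two properties attached to ‘staff’ and ‘pilots’ are assumptions added for illustration.

```python
# Category -> parent category links, and individual -> category links.
subset_of = {"pilots": "staff", "staff": "GoFly Airlines"}
member_of = {"Jay": "pilots"}

# Properties attached at each category level (illustrative assumptions).
properties = {"staff": {"has_id_badge"}, "pilots": {"can_fly"}}

def properties_of(individual):
    """Walk up the subset chain, collecting inherited properties."""
    props = set()
    category = member_of[individual]
    while category is not None:
        props |= properties.get(category, set())
        category = subset_of.get(category)
    return props

print(sorted(properties_of("Jay")))  # ['can_fly', 'has_id_badge']
```

Jay picks up ‘can_fly’ from ‘pilots’ and ‘has_id_badge’ from ‘staff’ purely by following the links – exactly the category-membership inference semantic networks are designed for.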
Inheritance becomes particularly complicated when an object belongs to multiple categories – which is why multiple inheritance is disallowed in some object-oriented programming languages.
First-order, or predicate, logic allows the use of sentences that contain variables and can be used alongside semantic networks to say things about objects and the properties of categories.
Description logics arose in response to pressure to formalize semantic networks, making it easier to define and describe the properties of each category.
They are particularly helpful for comparing category definitions to identify subsets – known as subsumption.
What do description logics also help with?
They also help with classification (determining if an object belongs to a category) and consistency (recognizing whether membership can be logically satisfied).
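A much-simplified version of subsumption and classification treats each category definition as a set of required properties: one category subsumes another when its requirements are a subset of the other’s. The definitions below are invented for illustration – real description logics use far richer constructors than property sets.

```python
# Each category is defined by the properties its members must have.
definitions = {
    "animal": {"alive"},
    "dog": {"alive", "barks"},
    "terrier": {"alive", "barks", "digs"},
}

def subsumes(general, specific):
    """Does every member of `specific` necessarily belong to `general`?"""
    return definitions[general] <= definitions[specific]

def classify(props):
    """All categories an object with these properties belongs to."""
    return {c for c, required in definitions.items() if required <= props}

print(subsumes("dog", "terrier"))              # True: every terrier is a dog
print(sorted(classify({"alive", "barks"})))    # ['animal', 'dog']
```

Consistency checks fall out of the same machinery: a definition whose requirements can never be jointly satisfied has no possible members.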
The inferences drawn from knowledge representation systems are typically a ‘default’ rather than a certainty.
What is a ‘truth maintenance system’?
A ‘truth maintenance system,’ used to identify inconsistencies in data, handles ‘belief revision’ when new information overrides or replaces what is already known.
Sentences are ‘told’ to the knowledge base and numbered to keep track of revisions.
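A toy version of that tell-and-revise loop looks like this. The interface and the string-based negation are assumptions made for illustration – a real truth maintenance system tracks justifications between beliefs, not just direct negations.

```python
class SimpleTMS:
    """Numbered sentences are 'told' in; a new fact retracts its direct negation."""

    def __init__(self):
        self.sentences = {}  # sentence number -> sentence text
        self.counter = 0

    def tell(self, sentence):
        # Belief revision: telling "not p" retracts any stored "p", and vice versa.
        if sentence.startswith("not "):
            negation = sentence[4:]
        else:
            negation = "not " + sentence
        for num, s in list(self.sentences.items()):
            if s == negation:
                del self.sentences[num]
        self.counter += 1
        self.sentences[self.counter] = sentence
        return self.counter

kb = SimpleTMS()
kb.tell("alarm works")
kb.tell("not alarm works")  # new information overrides sentence 1
print(list(kb.sentences.values()))  # ['not alarm works']
```

The numbering makes revisions traceable: even after sentence 1 is retracted, the system knows that sentence 2 is what replaced it.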
Truth maintenance may not solve all of AI’s problems in maintaining valid beliefs, yet it massively increases AI’s ability to handle complex environments and hypotheses.
Perhaps some would like to imagine that specialized human talents, such as ‘creativity,’ will always remain beyond the abilities of AI. And yet, this may not be the case.
While reasoning and rational, logical thinking are vital to AI, what adds uniqueness to humans is our ability to think creatively.
While currently, true creativity is probably beyond the limits of AI, work is underway to explore and harness the power to come up with new ideas and show genuine innovation.
How can AI start to be creative?
And we are starting to see some success. DALL·E 2 can create original images and art from a text description, combining concepts, attributes, and styles to create something novel.
And creativity is not only found within the world of art. Something surprising happened at the now-famous move 37 in the second game of Google DeepMind’s AlphaGo match against Go world champion Lee Sedol.
The AI seemed to be rewriting the rules of Go and played a move no human would have imagined.
Lee, who ultimately lost the match 4-1, was dumbstruck: “This move was really creative and beautiful,” he said. It seems that AI creativity may not be something for future theorists to discuss but for AI historians to recognize in past successes.
Knowledge Representation Issues in Artificial Intelligence FAQs
What are the various issues in knowledge representation AI?
There are many issues you can encounter when using an AI system for knowledge representation. Some representations can be restrictive and challenging to work with, and some knowledge systems might not be very natural for the operator to understand.
How many issues are there in knowledge representation?
Each knowledge representation method has its own pros and cons. Depending on the information you want the AI to find, you might run into issues with some of the methods.
For example, if you choose semantic representation to find new knowledge, you might find that the network itself isn’t very intelligent or that the solution that you’re searching for doesn’t exist in that network.
The issues that you will find depend on the knowledge systems you want to use.
What are the 4 ways of knowledge representation in AI?
There are four main types of knowledge representation in AI. These include:
- Logical representation
- Semantic networks
- Frame representation
- Production rules