Kinnu

Symbolic Programming

What is symbolic programming?

Earlier, we saw how symbolic programming was at the heart of the AI golden age. In simple terms, this meant giving a computer a tree of instructions, which effectively allowed it to 'make decisions' by following logical rules.

As you probably remember, symbolic programming is less popular now than it used to be. Most modern models, like AlphaGo and ChatGPT, use a totally different approach. They make their decisions using a web of neurons, which scientists call a neural network.

We'll learn all about that later. But first, we're going to take a look at symbolic programming in more detail. Yes, neural networks are more popular and more powerful. But in certain contexts, symbolic AI is still a very effective approach.

Some people even refer to it as Good Old-Fashioned Artificial Intelligence (GOFAI). That's why we're starting off with it. Good old-fashioned stuff first, neural networks later on.

First things first: you're probably wondering why we call it symbolic programming. Well, it's because this approach uses symbols to represent ideas and objects.

When we say 'symbols', we're not talking about hieroglyphics. Instead, we're talking about labels. Labels which represent something. It's actually pretty similar to the way that a human uses words.

When you say the word "apple", you're just using that word as a symbol for a particular fruit. A particular fruit with a particular taste, and a particular color, which grows on a particular tree.

The whole idea of 'appleness' is summed up by the word "apple". In other words, "apple" is just a symbol which represents that idea.

Apple by Abhijit Tembhekar (CC BY 2.0) <https://creativecommons.org/licenses/by/2.0>, via Wikimedia Commons

A symbolic AI might have access to a library of thousands of symbols, each representing an idea. We call this library a knowledge base – and for good old-fashioned symbolic AI, it will need to be programmed by hand.

For example, a programmer might manually add "apple", and associate it with properties like "fruit", "red", and "grows on trees". They might also add "pineapple", and associate it with "fruit", "yellow", and "grows on ground". Do this for a hundred types of fruit, and you end up with a decent knowledge base.

Pineapple by Kaweesaesther (CC BY-SA 4.0) <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons

And the AI can use this knowledge base to 'think'. For example, you could ask it, "What is a pineapple?" and it could tell you "A pineapple is a yellow fruit which typically grows on the ground."
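A hand-built knowledge base like this can be sketched in just a few lines of code. Here's a minimal version in Python, where each symbol maps to a set of properties. The entries and the `describe` helper are purely illustrative; a real system would hold thousands of hand-programmed symbols.

```python
# A tiny hand-programmed knowledge base: each symbol maps to a set of
# properties. (Illustrative entries only; a real system holds thousands.)
knowledge_base = {
    "apple": {"fruit", "red", "grows on trees"},
    "pineapple": {"fruit", "yellow", "grows on ground"},
}

def describe(symbol):
    """Answer 'What is a <symbol>?' by listing the symbol's stored properties."""
    props = knowledge_base.get(symbol)
    if props is None:
        return f"I don't know what a {symbol} is."
    return f"A {symbol} is: " + ", ".join(sorted(props))

print(describe("pineapple"))  # A pineapple is: fruit, grows on ground, yellow
```

Notice that the AI's 'thinking' here is nothing more than looking up labels a programmer typed in by hand.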

Propositional logic

So, we know that symbolic AI is built around a bank of pre-programmed symbols. We looked at a fruit example: "apple" might be a symbol for a number of properties, including "red" and "grows on trees".

What we now need to talk about is how these symbols relate to logic.

Logic, in this context, is just a set of rules which tell an AI how to 'think' about different symbols. Here's an analogy: symbols are the building blocks of symbolic AI, while logic is a detailed instruction guide which tells the AI how to arrange those blocks into shapes.

That isn't a perfect analogy. But it's a helpful idea to bear in mind as we look at logic in more detail.

There are actually a few different types of logic. We're going to start with the simplest: propositional logic.

Propositional logic is built around statements which can either be true or false. For example, a statement like "apples are edible" (true) or "bananas are purple" (false). We call these statements propositions.

These propositions will need to be programmed manually. But they will usually contain a symbol that's already in the system. For example, "the box contains a banana" is a proposition which includes the symbol "banana".

Banana box. (Public domain), via Wikimedia Commons

A proposition might switch back and forth between 'true' and 'false'. Sometimes, there might be a banana in that box. Sometimes, there might be an apple. But it will always be one or the other: these statements should never be both 'true' and 'false' at once.

Along with true-or-false propositions, propositional logic also uses logical connectives. These are words like IF and THEN, which can be used to link a series of statements together.

Let's say you had two propositions: "it is raining" and "you need an umbrella". Using logical connectives, you could link those together into "IF it is raining THEN you need an umbrella".

In effect, these connectives turn the true-or-false propositions into a decision tree. And if you tell the AI "I'm about to go out, should I bring an umbrella?" it could use this tree to give you a helpful answer.

If "it is raining" is true, the AI will tell you "yes, you need an umbrella." If "it is raining" is false, the AI will tell you "no, you don't need one today".

Along with IF and THEN, two more examples of logical connectives are AND and OR. Again, these connectives can be used to turn a set of true-or-false propositions into a decision tree.

Let's look at another example: "IF (you keep coughing OR you keep sneezing) AND you have a temperature THEN you might have the flu". The parentheses matter here: the coughing-or-sneezing part is grouped together, and either one combined with a temperature points to the flu.
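Propositions and connectives map directly onto boolean logic in code. Here's a hedged sketch of a flu rule of this shape in Python, with each proposition as a True/False value and the AND/OR connectives as Python's own `and`/`or`. The function name and groupings are illustrative, not from any real diagnostic system.

```python
# Each proposition is a plain True/False value. The connectives IF, THEN,
# AND, OR map onto ordinary boolean operators. Parentheses make the
# intended grouping of OR and AND explicit.
def might_have_flu(coughing, sneezing, has_temperature):
    # IF (coughing OR sneezing) AND you have a temperature THEN maybe flu
    return (coughing or sneezing) and has_temperature

print(might_have_flu(coughing=True, sneezing=False, has_temperature=True))   # True
print(might_have_flu(coughing=False, sneezing=True, has_temperature=False))  # False
```

Without the parentheses, `and` would bind more tightly than `or`, and the rule would quietly mean something different: that's exactly why symbolic programmers have to be precise about how connectives group.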

This is still quite a simple example. But imagine programming a symbolic AI with every possible medical symptom, plus every possible diagnosis. You'd end up with a pretty impressive model: a computer that uses thousands of symbols, and massive decision trees, to 'think' like an AI doctor.

Predicate logic

Along with propositional logic, a symbolic AI might also use another type of logic, which is known as predicate logic.

With propositional logic, we saw how a symbol and its properties could be turned into true-or-false propositions. For example, "apple" (symbol) and "can be red" (property) could be turned into "apples can be red" (true).

Predicate logic follows a similar principle, but instead of combining the symbol and the property into a statement, it combines them into something called a triplet. This triplet consists of three parts: subject, predicate and object.

In the example above, "apple" is the subject, "can be" is the predicate, and "red" is the object. In another example, "dog" might be the subject, "is" might be the predicate, and "mammal" might be the object.

Dog. Image via Pexels

Predicate logic lets symbolic AI do some pretty interesting things.

Imagine two symbols. The first symbol, "dog", has "is mammal" attached to it. The second symbol, "mammal", has "is warm blooded" attached to it.

Your AI turns these examples into triplets: "dog" (subject) "is" (predicate) "mammal" (object), and "mammal" (subject) "is" (predicate) "warm blooded" (object).

Let's present those two triplets in a table, like the one below.

Subject  | Predicate | Object
dog      | is        | mammal
mammal   | is        | warm blooded

So what's the point of these triplets? Let's take a look.

Imagine that you wanted to ask your AI whether dogs are warm blooded. Unfortunately, "is warm blooded" wasn't one of the properties attached to the symbol "dog". In other words, the AI doesn't know the answer.

However... using some predicate logic, this symbolic AI might be able to work something out.

It looks at those triplets, and identifies that mammals are warm blooded. It also identifies that dogs are mammals. And if dogs are mammals, and mammals are warm blooded...

"Yes," the AI tells you. "I believe that dogs are warm blooded."

Just like propositional logic, predicate logic can be a very powerful tool.

In some symbolic AI models, information is arranged into something called a knowledge graph. This is a complex network of nodes and edges, with the nodes representing subjects and objects, and the edges representing the predicate relationships between them.

Example of a knowledge graph. By Fuzheado (CC BY-SA 4.0) <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons

A knowledge graph can help a symbolic AI to draw sophisticated, human-like deductions. It can wind along the graph from node to node, drawing long strings of logic from subject to object to subject to object until it reaches new decisions and conclusions.

User: "Could Artificial Intelligence exist without sunlight?"

AI: "AI is made by humans, and humans eat animals, and animals eat plants, and humans also eat plants, and plants need sunlight, so no, in conclusion, Artificial Intelligence could not exist without sunlight."

Types of knowledge

In the 1960s, symbolic AI was used to build an exciting new type of machine.

It was known as an expert system, and it was designed to mimic the decision-making skills of human experts, like doctors, lawyers, and financial advisors.

These expert systems had two main parts. First, a colossal knowledge base, holding thousands of relevant symbols. For example, a medical expert system would have a knowledge base of symptoms and diseases, while a legal expert system would have a knowledge base of laws and case studies.

The second part was called the inference engine: a piece of software which applied propositional logic and predicate logic to the giant knowledge base. This logic allowed the system to 'make decisions'.
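At its core, an inference engine just keeps applying IF/THEN rules to a set of known facts until nothing new can be derived: a technique known as forward chaining. Here's a miniature, illustrative sketch in Python; the facts and rules are made up for the example, and a real expert system's engine would be far more elaborate.

```python
# A miniature inference engine: repeatedly apply IF/THEN rules to the
# known facts until no new conclusions appear (forward chaining).
# (Facts and rules are illustrative, not from a real expert system.)
facts = {"coughing", "has temperature"}
rules = [
    # (IF all these conditions hold, THEN conclude this)
    ({"coughing", "has temperature"}, "might have flu"),
    ({"might have flu"}, "recommend rest"),
]

def infer(facts, rules):
    """Return the full set of facts derivable from the rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if its conditions are all known and it adds something new.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer(facts, rules))
```

Note how the second rule only fires because the first one has already added "might have flu" to the facts: conclusions feed back in as new conditions, which is what lets these systems chain decisions together.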

Parts of an expert system.

The knowledge base in an expert system could hold many different types of knowledge. If you don't know what we mean by 'types of knowledge'... well, let's look at a few examples.

First, we have something called declarative knowledge. This is basically just a simple, solid fact. You might have the symbol "coughing", with the property "is a symptom of flu". Or the symbol "theft", with "is illegal".

Second, we have procedural knowledge. This is more like a process, or a series of steps. For example, instead of saying that "coughing is a symptom of flu", procedural knowledge might be step-by-step instructions that explain how to treat this illness.

Then there's heuristic knowledge, also known as a 'rule of thumb'. This describes general guidelines, or approximate strategies, which might help the AI take shortcuts towards better decisions.

For example, here's a useful 'rule of thumb' for a medical system: "if symptoms persist, it's worth seeing an actual doctor." Or here's one for a legal system: "if this is a first offense, the punishment should be more lenient."

Humans will often use 'rules of thumb' when they're making real-life decisions. Heuristic knowledge is just a way to allow symbolic AI to do the same.

Thumb. Image via Pexels

When expert systems were first introduced, there was a lot of hype around them. By the 1980s, they were being used by many of the world's top businesses, and even some universities.

As we've already seen, this hype died away during the AI winter. At the end of the day, symbolic AI was too reliant on manual programming. At higher levels of complexity, there were simply too many symbols and statements for anyone to feasibly produce.

But despite that fact, there's still a place for symbolic AI today. Many doctors, for example, use modern equivalents of expert systems to help them diagnose illnesses. Plenty of businesses and research labs use knowledge graphs to organize data. Even robots and drones use symbolic logic: IF battery is low, THEN it is time to recharge.

Think of it like this: with symbolic AI, there's a threshold of complexity that manual programming can't cross. But for tasks that fall below this threshold? Symbolic AI can still be an effective approach.