A finite-state machine is an abstract model consisting of a finite set of states, finite sets of input and output signals, a state-transition function, and an output function. Because at any given moment the automaton processes discrete information and can change its internal state, finite-state automata are a convenient tool for emulating the behavior of various objects, including in non-standard situations.
Let us consider the simplest way of using state machines in practice. Let each state of the automaton represent an action performed by some game model. Imagine that we want to simulate the behavior of a snake that hunts for prey.
During the hunt the snake has several main tasks: find prey, track it, wait for the right moment, and attack. If the prey manages to escape, the snake looks for new prey. As can be seen in Figure 1, the snake's behavior is rather simple: it moves, looks for prey, and attacks at the right time. All its actions follow certain rules of behavior and occur in a certain order; this predictability is the main advantage of finite-state machines.
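The snake's behavior can be sketched as a minimal transition table. The state and event names below are illustrative, not taken from the original text; the point is only that each (state, event) pair deterministically yields the next state.

```python
from enum import Enum, auto

class State(Enum):
    SEARCH = auto()   # looking for prey
    TRACK = auto()    # following the prey
    WAIT = auto()     # waiting for the right moment
    ATTACK = auto()   # striking
    GO_HOME = auto()  # returning home after a successful hunt

# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    (State.SEARCH, "prey_spotted"): State.TRACK,
    (State.TRACK, "prey_close"): State.WAIT,
    (State.WAIT, "right_moment"): State.ATTACK,
    (State.ATTACK, "prey_caught"): State.GO_HOME,
    (State.ATTACK, "prey_escaped"): State.SEARCH,
}

def step(state, event):
    """Return the next state, or stay put if the event is irrelevant."""
    return TRANSITIONS.get((state, event), state)

s = State.SEARCH
for event in ["prey_spotted", "prey_close", "right_moment", "prey_escaped"]:
    s = step(s, event)
print(s)  # State.SEARCH: the prey escaped, so the snake searches again
```

Note that events not listed in the table simply leave the state unchanged, which keeps the machine total without cluttering the table with self-loops.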
Let us complicate the snake's task: on its way it can now meet not only prey but also an enemy it cannot defeat. In this case the snake has to run away. The enemy can appear while the snake is in any of the following states: "searching for prey", "tracking", or "going home". It is important to understand that if the snake was tracking prey when the enemy appeared, the prey escapes, and after fleeing the snake must return to searching for prey. If the snake was searching for prey, it likewise returns to the search. But if the snake was going home, then after escaping the enemy it does not need to resume the hunt; it simply continues home. Let us depict the changes in the figure:
It might seem that nothing much has changed. However, the programmer who decided to simulate the life of the snake now faces a real problem: from the "escape from the enemy" state the snake can move to two different states at once, "searching for prey" or "going home". As stated above, the snake should go home only if it was already on its way home; otherwise it must continue the hunt. Stack-based finite-state machines were invented exactly to solve such problems.
Such a machine pushes every interrupted action onto a stack, and transitions occur as states are pushed onto or popped from that stack. For example, if the snake was going home when it suddenly had to escape from an enemy, it remembers its "going home" state (i.e., pushes it onto the stack) and starts performing the "escape from the enemy" action. When the snake completes the current action, it pops the last state from the stack and resumes it. This is what it looks like:
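The stack mechanism described above can be sketched in a few lines. This is a minimal illustration under assumed names ("going_home", "escape_from_enemy"); a real implementation would also dispatch each state's behavior.

```python
class StackFSM:
    """Keeps interrupted states on a stack and resumes them when the
    current action finishes."""

    def __init__(self):
        self.stack = []

    def push(self, state):
        # Start a new top-level action.
        self.stack.append(state)

    def interrupt(self, new_state):
        # The current state stays on the stack; the urgent one goes on top.
        self.stack.append(new_state)

    def finish(self):
        # The current action is done: pop it and resume whatever is below.
        self.stack.pop()
        return self.stack[-1] if self.stack else None

fsm = StackFSM()
fsm.push("going_home")
fsm.interrupt("escape_from_enemy")   # enemy appears on the way home
resumed = fsm.finish()               # escape complete
print(resumed)                       # going_home: no need to re-search
```

Because the interrupted state is remembered on the stack, the machine needs no extra bookkeeping to decide between "searching for prey" and "going home" after the escape: it simply resumes whatever it was doing before.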
Let us now consider the life of the snake in more detail: besides hunting, let the snake also be able to leave its home to explore the territory. Then, before going anywhere, the snake's automaton will look something like this:
The state graph of the snake shows that each selected action expands into a sequence of new actions. In other words, moving to a new state can launch a new, nested finite-state machine.
Thus we have seen that a big task can be split into several smaller finite-state machines, where each automaton above the lowest level can contain other automata as states. Lowest-level automata must not refer to other automata, as this would cause an infinite recursive sequence of state calls on the stack.
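This hierarchical decomposition can be sketched as follows: a top-level state either names a nested automaton or is a primitive action, and nested automata contain only primitives, which bounds the recursion. All state names here are illustrative assumptions.

```python
# Lower-level automata: plain sequences of primitive actions.
HUNT = ["search_prey", "track", "wait", "attack"]
EXPLORE = ["leave_home", "scout_territory", "return"]

# Top-level states that expand into nested automata.
TOP_LEVEL = {
    "hunt": HUNT,
    "explore": EXPLORE,
}

def run(plan):
    """Flatten a top-level plan into its sequence of primitive actions.
    States not in TOP_LEVEL are treated as primitives (one-level nesting
    only, so the expansion always terminates)."""
    actions = []
    for state in plan:
        actions.extend(TOP_LEVEL.get(state, [state]))
    return actions

print(run(["explore", "hunt", "go_home"]))
```

Allowing a lower-level sequence to name another automaton would reintroduce unbounded expansion, which is exactly the infinite recursion the text warns about.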
Often in real life you have to do several things at once, or one action can only happen after several other actions have been completed. Simple finite-state machines cannot cope with such a task, since they perform only one task at a time. Petri nets can. An ordinary automaton executes events sequentially and cannot be in more than one state at a given time, while a Petri net allows several processes to run independently of each other, and a transition to a new state can be conditioned on the completion of other tasks.
Here is a simple example of how such nets work. Imagine a person coming home from work who does not want to waste time preparing a meal. The best option would be to order food from a shop. Imagine also that the person wants to stop at a shop on the way home. Getting into the car, the person might call a food delivery service and then enter the number of the nearest shop into the navigator. In this case the tasks "order food" and "go to the shop" can be written in the form of an automaton:
However, the person can call the food delivery service and at the same time search for the desired shop in the navigator, saving time and getting home faster. The person cannot set off until all the necessary actions have been completed. Exactly this situation can be modeled with a Petri net: the two tasks become places (vertices) of the net, and the movement of tokens between places is driven by the completion of the corresponding actions. It will look like this:
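The synchronization described above can be sketched as a minimal Petri net: a transition fires only when every one of its input places holds a token. The place and transition names ("food_ordered", "shop_found", "go") are illustrative assumptions.

```python
class PetriNet:
    """Minimal Petri net: a transition fires only when all of its
    input places hold at least one token."""

    def __init__(self, places, transitions):
        self.marking = dict(places)      # place -> token count
        self.transitions = transitions   # name -> (input places, output places)

    def can_fire(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.can_fire(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1         # consume tokens
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1  # produce tokens
        return True

net = PetriNet(
    {"food_ordered": 0, "shop_found": 0, "driving": 0},
    {"go": (["food_ordered", "shop_found"], ["driving"])},
)
net.marking["food_ordered"] = 1      # called the delivery service
print(net.fire("go"))                # False: the shop is not found yet
net.marking["shop_found"] = 1        # entered the shop in the navigator
print(net.fire("go"))                # True: both tasks done, we can go
```

The two preparation tasks can complete in either order and even concurrently; the "go" transition acts as the join point that an ordinary sequential automaton cannot express.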
Nowadays many simple chatbots simulate problem solving by means of ordinary automata. Such bots can handle simple human tasks such as "find a suitable movie by asking a few questions", "book a ticket", or "order a taxi". Their solutions do not go beyond the single current problem, and when several solutions are possible, the bot simply picks one without any analysis.
One problem shared by all modern bots is the inability to build their own nets or automata to solve a new problem during a conversation with a human. It is worth saying, however, that building a system that can automatically solve a range of problems and remain universal with respect to the tasks being solved amounts to building a strong AI. Such an AI must solve a number of simpler tasks: determine the meaning of a conversation with a human, extract the necessary information from text in a structured form, divide complex tasks into subtasks, and find the best solution to a problem taking into account the solution of each subtask.
For the system to learn to build tasks in its "brain" in the form of Petri nets (or finite-state machines), it needs to master NLP skills: text understanding, the ability to recognize cause-and-effect relationships between the situations (actions, facts, etc.) described in the text, and the ability to draw logical conclusions from the processed data. For example, while reading the instructions for some machine, the AI must draw conclusions from the data obtained and fix them in its memory in order to use them later to build a logical chain for solving a problem.
The AI should do the same while reading fiction. For example, if a text describes how people choose a car, which criteria they considered and what they paid attention to, then the AI should remember not the individual person's conclusions but precisely the sequence of actions taken when choosing a car. Then, if a person starts talking to the AI about cars, the AI recalls what it has heard about cars before and what conclusions it drew. In this way the AI builds an automaton or a net inside itself, through which it can continue the dialogue and ask questions. Moreover, if the person wants to buy a car, the AI can rebuild the automaton so that at each stage of the conversation it looks for the best cars matching the description; coloured Petri nets can be used for this.
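The coloured-Petri-net idea can be illustrated briefly: unlike an ordinary Petri net, whose tokens are indistinguishable dots, a coloured net's tokens carry data (their "colour"), and a transition's guard decides which tokens may pass. The car attributes, guard criteria, and place names below are all illustrative assumptions, not part of any real system.

```python
# Tokens in the "candidates" place: each token is a car description.
cars = [
    {"model": "A", "price": 15000, "colour": "red"},
    {"model": "B", "price": 9000, "colour": "blue"},
    {"model": "C", "price": 12000, "colour": "red"},
]

def guard(token, budget, colour):
    """Transition guard: only tokens matching the buyer's criteria
    (collected during the dialogue) may fire the transition."""
    return token["price"] <= budget and token["colour"] == colour

# Firing the transition moves matching tokens to the "shortlist" place.
shortlist = [car for car in cars if guard(car, budget=13000, colour="red")]
print([car["model"] for car in shortlist])  # ['C']
```

As the dialogue progresses, the guard's parameters would be refined, narrowing the shortlist step by step; this is the sense in which the net is "rebuilt" during communication.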
Building such an approach to AI understanding of the world requires powerful language-processing methods that would let the AI learn new data while communicating with a human or reading books. One possibility is to use neural networks trained on marked-up text corpora, so that after training the AI can itself find in a text the patterns from which automata will later be built. However, this approach is complicated by the fact that compiling such a dataset would take enormous human resources, since the training sample must have a complex structure and a huge volume for every area of knowledge, and it would have to be translated into several languages, which makes training such a network practically impossible. Another option is to train a neural network to translate text from its linguistic representation into an abstract data representation, which a second network can then process. But this construction is not flexible either, since each language needs its own neural network and a constantly growing training sample.
The only correct option is to create an approach to AI that combines ease of learning new knowledge, independence from language (the ability to apply knowledge gained in one language to all other languages), and the ability to build logical judgments from the acquired knowledge without human involvement and without special deep learning. Such an approach would let the AI create multi-turn dialogues from previously accumulated experience and build flexible systems, including a personal assistant that receives from a person only the information that the person chooses to share.