The development of artificial general intelligence is becoming an increasingly urgent task, since many problems cannot be fully solved by highly specialized solutions. In addition, existing narrow solutions are expensive to develop and require an individual approach to their implementation.
Let us consider one approach to creating an intellectual kernel capable of solving various problems in arbitrary environments with limited resources.
The intellectual kernel is a complex software system that includes various methods of storing and processing knowledge. It is a hybrid model consisting of knowledge layers with different levels of representation and abstraction. These layers can be assembled and configured so that intellectual systems for different purposes can be built on the basis of the kernel: from intellectual assistants to Robotic Process Automation.
The kernel under development is a set of components called layers. Each layer combines knowledge and methods of processing it at a certain level of abstraction. A layer can be created within the framework of a known model, or it can be a hybrid that uses several approaches to problem solving at once. There are two types of layers: physical and logical. A physical layer is a separate technology; a logical layer is a separate body of knowledge.
Let us look at the individual elements of the intellectual kernel.
The abstract ideal layer contains abstract knowledge about the world, its structure and basic connections. It serves to form a general world image, or world model, which is necessary for the functioning of the entire system.
Knowledge is stored in a special version of a semantic network. There are no symbolic representations of natural-language entities in this network; instead, it stores the connections of entities to one another. Meta-knowledge makes it possible to determine the truth of knowledge, its completeness, its evidence, and so on.
The elements of the layer are entities and connections. An entity is a class of objects or phenomena of the surrounding or a fictional environment. An entity can be represented in any form: text, image, sound, video, data structure, etc. Connections unite entities with each other and form a general picture of the world [1]. There are 13 types of such connections.
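As an illustration only, the sketch below shows one way such a layer could be encoded; the names Entity, Connection and ConnectionType, the trueness field and the three listed connection kinds are assumptions made for the example and do not reproduce the 13 actual connection types.

from dataclasses import dataclass, field
from enum import Enum, auto

class ConnectionType(Enum):
    # Hypothetical subset of the connection types mentioned in the text.
    IS_A = auto()
    PART_OF = auto()
    CAUSES = auto()

@dataclass
class Entity:
    # An entity carries no natural-language symbol, only an identifier and
    # optional references to external representations (text, image, sound, ...).
    entity_id: int
    representations: list = field(default_factory=list)

@dataclass
class Connection:
    # A typed, directed link between two entities, optionally carrying
    # meta-knowledge such as an estimate of trueness.
    source: Entity
    target: Entity
    kind: ConnectionType
    trueness: float = 1.0

# Example: "a dog is an animal" expressed purely through entity identifiers.
dog, animal = Entity(1), Entity(2)
link = Connection(dog, animal, ConnectionType.IS_A, trueness=0.99)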
The factographic layer contains knowledge about the world in its variety, taking into account place, time, mode of action and other parameters. It represents one layer of a multilevel memory model. Its main knowledge representation model is a significantly reworked frame model.
There are predetermined types of slots that can carry any knowledge expressed by a simple common sentence. Slot values are entities defined in the abstract ideal layer. Complex sentences are represented as trees whose vertices are fact frames and whose edges define the type of connection between the facts.
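Purely for illustration, a fact frame and a complex sentence built as a tree of fact frames might look roughly as follows; the slot names and the "cause" relation label are assumptions made for the example, not the kernel's actual slot set.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FactFrame:
    # Predetermined slots of a fact frame; slot values are identifiers of
    # entities from the abstract ideal layer (plain integers here for brevity).
    action: Optional[int] = None
    agent: Optional[int] = None
    obj: Optional[int] = None
    place: Optional[int] = None
    time: Optional[int] = None
    mode: Optional[int] = None

@dataclass
class FactNode:
    # A vertex of the tree representing a complex sentence; each edge carries
    # the type of connection between facts (cause, condition, sequence, ...).
    frame: FactFrame
    children: list = field(default_factory=list)  # list of (connection_type, FactNode)

# "The robot moved the box because the operator asked" as two linked fact frames.
moved = FactNode(FactFrame(action=101, agent=7, obj=42))
asked = FactNode(FactFrame(action=102, agent=8, obj=7))
moved.children.append(("cause", asked))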
The logical layer contains knowledge about reasoning. Reasoning itself is not built into the system from the start; it is one of the types of knowledge. The logical layer uses production systems and Boolean algebra functions [2]. The problem of choosing and applying productions is solved with additional meta-knowledge about their activation. The elements of productions and logical formulas are facts from the factographic layer.
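As a simplified illustration, a production whose applicability is governed by separate activation meta-knowledge could be sketched as follows; the fact encoding, the rule format and the activation fields are assumptions for the example, not the kernel's actual representation.

# Facts are encoded here as frozensets of (slot, value) pairs
# drawn from the factographic layer.
facts = {frozenset({("action", "rains"), ("place", "outside")})}

def condition(fact_base):
    # Condition part of the production: does a matching fact exist?
    return any(("action", "rains") in f for f in fact_base)

def effect(fact_base):
    # Action part of the production: add a new fact to the base.
    fact_base.add(frozenset({("action", "take"), ("obj", "umbrella")}))

# Meta-knowledge that governs whether this production may be activated.
activation_meta = {"enabled": True, "priority": 5}

def fire(fact_base):
    # Apply the production only if its activation meta-knowledge allows it
    # and its condition holds on the current fact base.
    if activation_meta["enabled"] and condition(fact_base):
        effect(fact_base)

fire(facts)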
The task layer makes it possible to implement a universal algorithmic system on top of the artificial intelligence knowledge. At the base of the task layer lies a modified Petri net with a local solution space connected to it [3]. The local solution space is a subset of the elements of the abstract ideal and factographic layers; it also serves as storage for the context of the problem being solved. An arbitrary number of tasks can be handled at the same time, and the algorithms for solving each of them can run in parallel.
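The following is a minimal sketch, assuming a classical place/transition marking, of how a task could be represented as a Petri-net-like structure with an attached local solution space; the field names, place names and step() method are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class Task:
    # Marking of a Petri-net-like structure: places hold tokens, transitions
    # consume and produce them; solution_space keeps the task context as a
    # subset of abstract-ideal and factographic elements.
    places: dict = field(default_factory=dict)          # place name -> token count
    transitions: list = field(default_factory=list)     # (input places, output places)
    solution_space: dict = field(default_factory=dict)  # local context of the task

    def step(self):
        # Fire the first enabled transition; several Task objects can be
        # advanced independently, which allows tasks to be handled in parallel.
        for inputs, outputs in self.transitions:
            if all(self.places.get(p, 0) > 0 for p in inputs):
                for p in inputs:
                    self.places[p] -= 1
                for p in outputs:
                    self.places[p] = self.places.get(p, 0) + 1
                return True
        return False

task = Task(places={"start": 1}, transitions=[(["start"], ["done"])])
task.step()   # fires the single transition, moving the token from "start" to "done"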
Separate applied layers are used to store knowledge of a specific field of application. The knowledge of these layers can be located in different physical layers of the kernel.
Another layer is used to store knowledge about the current implementation of the artificial intelligence kernel: its settings, the peculiarities of its behavior and the events related to it.
A further layer contains knowledge about the subjects with which the kernel interacts. The model of a subject is built up across all physical layers as a set of knowledge about it.
The dialogue layer contains knowledge about dialogue, its strategies and elements. It also covers the start and the end of a dialogue, the definition and change of topic, the choice of directions of dialogue development, and the processing of various dialogue situations.
Memory consists of seven layers that differ in access speed, structure and storage methods. Such a large number is necessary for running various kernel-based intelligent solutions on different devices, even devices with strong limitations on the amount of stored data. Knowledge can be located on any of the seven layers and can be moved from layer to layer if needed. There is also a mechanism for forgetting knowledge that is no longer relevant.
The first four layers are based on mechanisms already built into the AI, so the AI has quick access to knowledge in a form that can be used instantly during thought processes (an illustrative sketch of moving knowledge between the layers is given after the list):
Operating memory: knowledge valid at a particular moment in time
Operational memory: knowledge existing within the framework of a single dialogue with the system, including local solution spaces
Permanent memory: basic ideas about the world and the most important facts
Personal memory: knowledge that relates to the kernel itself, its functioning and development
Structured memory: a large amount of structured data; relational databases are used
Marked memory: unstructured data with added metadata; non-relational databases are used
Original memory: any data in its original form, i.e. files of various formats with metadata, protected by encryption and blockchain technology; a distributed storage system is used
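Purely as an illustration of moving knowledge between these layers, and of forgetting it on constrained devices, the sketch below uses an assumed placement policy and assumed names (MemoryLayer, choose_layer); it is not the kernel's actual mechanism.

from enum import Enum, auto

class MemoryLayer(Enum):
    OPERATING = auto()
    OPERATIONAL = auto()
    PERMANENT = auto()
    PERSONAL = auto()
    STRUCTURED = auto()
    MARKED = auto()
    ORIGINAL = auto()

def choose_layer(usage_count, device_capacity):
    # Illustrative placement policy only: frequently used knowledge stays in
    # fast layers, rarely used knowledge drops to slower storage, and on very
    # constrained devices it may be forgotten entirely (None is returned).
    if usage_count == 0 and device_capacity == "tiny":
        return None
    if usage_count >= 10:
        return MemoryLayer.OPERATING
    if usage_count >= 3:
        return MemoryLayer.STRUCTURED
    return MemoryLayer.ORIGINAL

choose_layer(usage_count=5, device_capacity="normal")   # -> MemoryLayer.STRUCTURED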
Interaction between the layers and the knowledge-processing mechanisms is carried out through a hierarchical multi-agent system. Some agents work within a particular layer, while others handle communication between individual kernel modules. Agents can call each other to solve problems and compete for system resources.
There are two ways of creating knowledge agents in the system. The first is low-level: the agent is code written in an interpreted programming language.
The second is high-level: it operates in the task layer with the help of a Petri net.
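For illustration, the two ways of defining an agent could look roughly like the sketch below; the Agent interface, the priority field and the task object's step() method are assumptions made for the example, since the actual API is not specified here.

class Agent:
    # Shared interface: agents can be called by other agents and compete
    # for system resources through a priority value.
    def __init__(self, name, priority=0):
        self.name = name
        self.priority = priority

    def run(self, request):
        raise NotImplementedError

class LowLevelAgent(Agent):
    # First way: the agent's behaviour is ordinary code in an interpreted language.
    def run(self, request):
        return {"echo": request}

class TaskLayerAgent(Agent):
    # Second way: the agent delegates its behaviour to a task description
    # (a Petri-net-like object with a step() method) interpreted by the task layer.
    def __init__(self, name, task, priority=0):
        super().__init__(name, priority)
        self.task = task

    def run(self, request):
        while self.task.step():
            pass
        return getattr(self.task, "solution_space", None)

LowLevelAgent("echo").run("ping")   # -> {"echo": "ping"}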
Language modules convert incoming text into the internal representation of the kernel and synthesize text from that internal representation. Currently they are not part of the kernel, but they use existing kernel knowledge in their work. This makes it possible to perform morphological, syntactic and semantic analysis of text simultaneously and to reduce the number of parse trees to a minimum by cutting off impossible options.
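A toy sketch of the idea of cutting off impossible readings with the help of kernel knowledge is given below; the token-pair representation and the allowed_pairs set are deliberate simplifications for the example, not the modules' real interfaces.

def parse_with_semantics(tokens, allowed_pairs):
    # Toy combined analysis: all candidate (verb, object) readings are generated,
    # then readings contradicting kernel knowledge are cut off immediately, so
    # impossible parse trees never reach later processing stages.
    candidates = [(v, o) for v in tokens for o in tokens if v != o]
    return [c for c in candidates if c in allowed_pairs]

allowed = {("eats", "apple")}
parse_with_semantics(["apple", "eats"], allowed)   # -> [("eats", "apple")]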
The architecture described above makes it possible to solve a number of problems that stand in the way of artificial general intelligence:
AI transparency: all of the AI's chains of thought and lines of reasoning can be tracked, documented and explained
One-time training: the structure of the AI's knowledge allows it to be transferred from one kernel to another without loss of data; moreover, data can be entered into the system in text form
Fast AI learning: the system can be trained on any amount of raw data, and training requires only one iteration; there are methods to control and edit new knowledge
Structured learning: knowledge is arranged in a multilevel hierarchical structure, and learning mechanisms make it possible to check the consistency of new knowledge with old knowledge
Solving the problem of catastrophic forgetting: any level of knowledge can be taught without losing previously added knowledge; in addition, there are forgetting mechanisms that do not lead to a loss of knowledge integrity
The possibility of incremental learning: knowledge can be accumulated in the system gradually; it is possible to store contradictions, vague knowledge and knowledge about false information
At present, a middle-level ontology with more than 70 thousand classes of entities and their relationships is implemented in the abstract ideal layer. Mechanisms and interfaces for editing the ontology, facts, logical inferences and tasks have been created. Language modules for Russian and English have been written. It is possible to work with several intellectual kernels and transfer any knowledge between them. Text-based learning mechanisms have been implemented. Several prototypes that use the current versions of the intellectual kernel in applied tasks have been created.
Experience with the described approach has shown that it is quite suitable for solving a wide spectrum of intellectual problems and can serve as a base model for artificial general intelligence. Further work will be directed at improving the knowledge structures in all layers of the intellectual kernel and at automating the creation of processing agents.