Nowadays, it is not always clear how exactly the term “artificial intelligence” is interpreted. Through wide use in different contexts, this phrase has lost its original meaning. This article proposes a modern classification of AI in the form of its development stages.
The problem with understanding the term “artificial intelligence” lies in the word “intelligence”. There is no single definition of “intelligence” that suits all researchers. A comprehensive, purposeful study of this phenomenon began as early as the 19th century, and at present such sciences as psychology, philosophy, anthropology, biology, linguistics, and computer science are engaged in the study of intelligence. Separate areas of research have formed within each of these sciences. Although there are prerequisites for creating a unified theory of intelligence based on a synthesis of all scientific results, humanity is still at the beginning stage of understanding this phenomenon.
Let us set aside the broad concept of “intelligence” and focus on what is meant by the term “artificial intelligence”. The connotation of this term has changed significantly since its emergence. Its original meaning was established during the famous Dartmouth AI Workshop in 1956. Here is an excerpt from the workshop proposal:
“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves”.
For a long time, in general terms, AI was understood as a system with all of the abovementioned abilities. But in recent decades, especially with the new wave of interest in the topic, the spread of deep learning technology, and wide media coverage, the meaning of the term has changed in the mass consciousness. Now the term is used in almost any situation and often causes conceptual confusion. It is worth noting that no existing system can be called AI in the classical sense. But since we can no longer change the mass consciousness, we should consider the separate understandings of what is called AI.
Let us place some AI terms on a horizontal axis, where we suppose an increase of some “index” of intelligence from left to right. We will not go into what this indicator is; let us simply suppose that the intelligent systems on the right are more intelligent than the systems on the left.
As you can see, with the existing AI concepts detailed as far as possible, it is possible to distinguish four separate terms (stages of development) of AI. Moreover, only AGI and Strong AI fit the classical definition, while Narrow AI and the systems to its left belong to the modern interpretation. Let us consider each AI development milestone separately. Obviously, there are no clear boundaries between stages.
The simplest intelligent systems are feedback systems. These are systems that receive signals from the outside world and adapt their work to changing conditions. A typical example is a traffic light that receives data from cameras and adapts its cycle to the number of cars waiting to pass the intersection. Such a system can use fuzzy logic algorithms. It is worth noting that even a system built on a large number of conditional operators can demonstrate “intelligent” behaviour. There are even precedents of such programmes receiving high scores in artificial intelligence competitions.
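The traffic-light example above can be sketched in a few lines. This is a minimal illustration (all names and threshold values are hypothetical, not taken from any real controller): a feedback system built purely from conditional operators, which nevertheless "adapts" its output to sensed conditions.

```python
# A toy feedback controller: the green-phase duration reacts to the
# number of waiting cars reported by a camera. No learning is involved;
# the "intelligent" behaviour comes entirely from conditional rules.

def green_duration(waiting_cars: int) -> int:
    """Return the green-light duration in seconds for one direction."""
    if waiting_cars == 0:
        return 5      # minimal phase when the road is empty
    elif waiting_cars < 10:
        return 15     # short phase for light traffic
    elif waiting_cars < 30:
        return 30     # medium phase
    else:
        return 60     # long phase for heavy congestion
```

From the outside, the intersection appears to behave adaptively, yet the system is just sensing and reacting through fixed rules — the lowest rung of the classification described here.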
All existing AI systems belong to the Narrow AI (narrow or weak AI) group. Such systems are based on one of the AI methods: most often deep neural networks, but there can also be expert systems, genetic algorithms, fuzzy systems, etc. They solve one specific problem under strictly defined boundary conditions. In most cases they already do this better and faster than a human. The problem lies only in the boundaries of application, which are quite narrow. In addition, a problem of explainability of decisions arose during the implementation of Narrow AI systems. The best-known systems today are based on deep learning, and they currently cannot explain how they arrived at a given decision. As a result, such systems have been shut down: in China, a neural network for identifying corrupt officials; in recruitment agencies, candidate-screening systems; in banks, loan-approval systems. The training of such systems has led to decisions that discriminate by gender and race.
The next step, towards which many artificial intelligence researchers are now moving, is Wide AI. The main task of such systems is to overcome the narrowness of applicability. There is a list of problems that need to be solved in these systems. Here are just a few of them:
As can be seen, the goal of this step is to expand the capabilities of existing AI systems: to allow them to learn continuously, share knowledge, and extend the boundaries of application while still solving quite “narrow” problems. An example could be an AI that plays a certain class of games (logic, card, board) and only needs to read the rules to start playing a new game of the same class. Another example is a system controlling a complex process, into which new parameters and behaviour algorithms can be built on the fly without losing control. This AI class is likely to be built using a hybrid approach, where individual AI technologies work together, complementing each other.
The next stage of AI development is artificial general intelligence (AGI). AGI is the ability of a system to solve any problems in complex environments with limited resources.
This definition is as broad as possible and requires further clarification: what “any problems” are, what the complex environments should be, what resources we are talking about, and how limited they are. Unfortunately, there is no clear understanding of what such a system is. In other words, we can say that in terms of its intellectual abilities it should be at the human level. This raises the question not of how to achieve this, but of how to verify it. Firstly, it is necessary to develop a certain set of criteria whose achievement would mean that we have created AGI. Secondly, it would be useful to have numerical indicators by which two artificial intelligence systems could be compared to determine which one is more intelligent. Neither problem has yet been solved even approximately. As achievement criteria, problems or challenges are often suggested that, presumably, cannot be solved using Narrow AI or even Wide AI. These could be:
A really good practical solution to the problem of AGI development metrics does not currently exist. Some authors believe that measuring partial progress towards AGI is extremely problematic because, in simple terms, an AGI system will not exhibit AGI properties until it is built in its entirety. Nevertheless, attempts to develop such a metric are being made. These include the Universal Intelligence Quotient, the Algorithmic Intelligence Quotient, the General AI Challenge, and the Abstraction and Reasoning Corpus (ARC) test.
Even more questions arise with the last stage of AI development, strong artificial intelligence. This term is often used in the sense of AGI, but it would be more correct to characterize such an intelligence as one that exceeds the level of human intelligence many times over. The most widespread idea is that if AI reaches the human level, it will not stop there, but will develop itself cyclically, increasing its capabilities at each iteration. Such development would be exponential, uncontrollable and, in general, may pose an existential problem for humanity. Here, however, we face the absence of well-developed theories and models of intelligence, and the above statements have not yet been confirmed by anything. If intelligence builds models of reality, then building ever more complex models will require ever more resources. Absolute accuracy cannot be achieved, and there will always be contradictions. The required resources grow exponentially with model accuracy. Accordingly, the long-term growth of artificial intelligence, leaving aside jumps when the material base changes, more likely follows a linear or logarithmic law.
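The resource argument above can be made concrete with a toy model. This is an illustrative sketch under an assumed functional form (the exponent k and the formulas are hypothetical, not given in the article): if the resources R needed to reach accuracy a grow as R(a) = e^(k·a), then the accuracy attainable from a budget R grows only logarithmically, a(R) = ln(R)/k, so each doubling of resources buys the same fixed increment of accuracy.

```python
import math

# Assumed toy model: resources grow exponentially with model accuracy,
# so attainable accuracy grows logarithmically with resources.

K = 10.0  # hypothetical steepness constant

def resources_needed(accuracy: float) -> float:
    """Resources required to reach a given accuracy: R(a) = e^(K*a)."""
    return math.exp(K * accuracy)

def attainable_accuracy(resources: float) -> float:
    """Inverse relation: accuracy reachable with budget R: a(R) = ln(R)/K."""
    return math.log(resources) / K

# Doubling the budget yields the same small, constant accuracy gain
# no matter how large the budget already is:
gain_small = attainable_accuracy(2e3) - attainable_accuracy(1e3)
gain_large = attainable_accuracy(2e9) - attainable_accuracy(1e9)
```

Under this assumption, `gain_small` and `gain_large` are equal (both ln(2)/K), which is the sense in which exponentially rising costs flatten intelligence growth into a logarithmic curve rather than a runaway exponential one.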
The proposed classification makes it possible to determine more precisely what the term AI means in each context of its use and, accordingly, to delimit the range of tasks and requirements imposed on it at each stage, which in turn will improve communication between specialists using this concept. We are only at the beginning of the road to strong artificial intelligence, and many theoretical and practical problems in understanding and creating it remain to be solved.