
Researchers from IBM Research Zurich and ETH Zurich have recently developed and presented a neuro-vector-symbolic architecture (NVSA). The architecture synergistically combines two powerful mechanisms, deep neural networks (DNNs) and vector symbolic architectures (VSAs), using the former for visual perception and the latter for probabilistic reasoning. Their architecture, presented in the journal Nature Machine Intelligence, can overcome the limitations of both approaches, solving Raven's progressive matrices and other reasoning tasks more effectively.
Currently, neither deep neural networks nor symbolic artificial intelligence (AI) alone demonstrates the level of intelligence observed in humans. A key reason is that neural networks struggle to bind the features of distinct objects within a shared, distributed data representation. This is known as the binding problem. Symbolic AI, on the other hand, suffers from an explosion of rules. These two problems are at the heart of neuro-symbolic AI, which aims to combine the best of both paradigms.
The neuro-vector-symbolic architecture (NVSA) is specifically designed to address these two problems: its powerful operators act on high-dimensional distributed representations, which serve as a common language between neural networks and symbolic AI. NVSA combines deep neural networks, known for their strength in perception tasks, with the VSA machinery.
A VSA is a computational model that uses high-dimensional distributed vectors and their algebraic properties to perform symbolic computations. In a VSA, all representations, from atomic symbols to compositional structures, are high-dimensional holographic vectors of the same fixed dimensionality.
VSA representations can be composed, decomposed, probed and transformed in various ways through a set of well-defined operations, notably binding, unbinding, permutation, inverse permutation and associative memory. These compositionality and transparency properties make VSAs well suited to analogical reasoning, but a VSA has no perception module for processing raw sensory inputs. It requires a perception system, such as a symbolic syntactic parser, to supply the symbolic representations that support reasoning.
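To make these operations concrete, here is a minimal sketch of one common VSA flavour, using bipolar vectors, element-wise multiplication as binding, and a nearest-neighbour codebook as the associative memory. The specific roles and fillers ('shape', 'color', and so on) are illustrative assumptions, not the codebooks used by NVSA.

```python
import numpy as np

# Minimal sketch of core VSA operations with bipolar (+1/-1) vectors,
# one common VSA model. The roles/fillers below are illustrative
# assumptions, not NVSA's actual codebooks.
D = 10_000                                   # fixed dimensionality for all vectors
rng = np.random.default_rng(42)

codebook = {name: rng.choice([-1, 1], size=D)
            for name in ["shape", "color", "square", "circle", "red", "blue"]}

def bind(a, b):                              # binding: element-wise multiplication
    return a * b                             # (self-inverse here, so unbinding = binding again)

def permute(a, k=1):                         # permutation; k = -1 gives the inverse permutation
    return np.roll(a, k)

def cleanup(v):                              # associative memory: nearest codeword by cosine similarity
    return max(codebook, key=lambda n: np.dot(v, codebook[n]) /
               (np.linalg.norm(v) * np.linalg.norm(codebook[n])))

# Compose a structure ("a red square") as a superposition of role-filler bindings...
obj = bind(codebook["shape"], codebook["square"]) + bind(codebook["color"], codebook["red"])

# ...then decompose it: unbind with the 'color' role and clean up the noisy result.
print(cleanup(bind(obj, codebook["color"])))   # -> 'red' with high probability for large D
```

Because binding with a bipolar vector is its own inverse in this model, the same operation serves for both composing a structure and querying it; the associative memory then removes the residual crosstalk noise.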
During the development of NVSA, the researchers focused on solving visual abstract reasoning problems, specifically the widely used IQ tests known as Raven's progressive matrices.
Raven's progressive matrices are tests designed to assess the level of intellectual development and abstract reasoning skills. They measure the capacity for systematic, planned and methodical intellectual activity, as well as overall logical reasoning. The tests consist of a series of items presented in sets, where one or more elements are missing. To solve Raven's progressive matrices, respondents must identify the missing element of a given set from among several available options. This requires advanced reasoning abilities, such as the ability to detect abstract relationships between objects, which may concern their shape, size, color or other attributes.
In initial evaluations, NVSA proved highly effective at solving Raven's progressive matrices. Compared with state-of-the-art deep neural networks and neuro-symbolic approaches, NVSA set a new record average accuracy of 87.7% on the RAVEN dataset. NVSA also reached the highest accuracy, 88.1%, on the I-RAVEN dataset, while most deep learning approaches suffered significant accuracy drops, averaging below 50%. NVSA also enables real-time computation on CPUs that is 244 times faster than functionally equivalent symbolic logical reasoning.
To solve Raven's matrices with a symbolic approach, a probabilistic abduction method is applied. This involves searching for a solution in a space defined by prior knowledge about the test. The prior knowledge is represented in symbolic form by describing all implementations of the possible rules that could govern the Raven tests. In this approach, finding a solution means iterating over all valid combinations, computing the rule probabilities and accumulating their sums. These computations are highly compute-intensive and become a bottleneck in the search, because the large number of combinations cannot be tested exhaustively.
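As a rough illustration of why this enumeration becomes expensive, the toy sketch below scores the candidate values of a missing panel for a single hypothetical rule ("the third value is the sum of the first two") and a single attribute by looping over every value combination; the real search multiplies this over many attributes, panels and rule types. The setup is assumed for illustration and is not the rule set used in the paper.

```python
import itertools
import numpy as np

# Toy setup (an assumption, not the paper's actual rules): one attribute with V
# possible values, and the perception module gives a probability mass function
# (PMF) over those values for each of the two context panels in a row.
V = 6
rng = np.random.default_rng(0)
p_panel1 = rng.dirichlet(np.ones(V))       # PMF over values for panel 1
p_panel2 = rng.dirichlet(np.ones(V))       # PMF over values for panel 2

# Hypothetical "arithmetic plus" rule: value of panel 3 = panel 1 + panel 2.
# Naive probabilistic abduction enumerates every value combination, multiplies
# the probabilities and accumulates the sums -- the bottleneck described above.
scores = np.zeros(2 * V - 1)
for v1, v2 in itertools.product(range(V), range(V)):
    scores[v1 + v2] += p_panel1[v1] * p_panel2[v2]

print(scores)   # probability of each candidate value for the missing panel
```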
NVSA does not run into this problem, because it can perform such extensive probabilistic computations in a single vector operation. This allows it to solve tasks such as Raven's progressive matrices faster and more accurately than other AI approaches based solely on deep neural networks or on VSAs. It is the first demonstration of how probabilistic reasoning can be performed efficiently using distributed representations and VSA operators.
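The sketch below gives a rough sense of how such a computation collapses into one vector operation, using a generic Fourier Holographic Reduced Representation (FHRR) style encoding; NVSA's actual codebooks and operators are defined in the paper and its GitHub library, so this is only an assumed, simplified analogue of the idea. Each value is encoded as an element-wise power of a random phasor vector, so binding (element-wise multiplication) adds exponents, and binding two probability-weighted superpositions implicitly performs the entire double loop from the previous sketch.

```python
import numpy as np

D = 10_000                                      # vector dimensionality
rng = np.random.default_rng(0)
z = np.exp(1j * rng.uniform(0, 2 * np.pi, D))   # random unit-magnitude phasor base vector

def encode_pmf(p):
    """Superpose the codewords z**v, weighted by the probability mass function p."""
    return sum(p[v] * z**v for v in range(len(p)))

V = 6
p1 = rng.dirichlet(np.ones(V))
p2 = rng.dirichlet(np.ones(V))

# One element-wise multiplication of the two superposition vectors replaces the
# exhaustive loop over all value pairs, since z**i * z**j = z**(i + j).
bound = encode_pmf(p1) * encode_pmf(p2)

# Decode by correlating with each candidate codeword; cross-terms average out
# as O(1/sqrt(D)) noise for large D.
decoded = np.array([np.real(np.vdot(z**k, bound)) / D for k in range(2 * V - 1)])

exact = np.convolve(p1, p2)                     # the result of the exhaustive loop
print(np.max(np.abs(decoded - exact)))          # small decoding error
```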
NVSA is an important step toward integrating different AI paradigms into a unified framework for solving tasks that involve both perception and higher-level reasoning. The architecture has shown great promise by solving complex logic problems efficiently and quickly. In the future, it can be tested on and applied to various other problems, potentially inspiring researchers to develop similar approaches.
The library that implements the NVSA functions is available on GitHub.
A complete example of solving Raven's matrices can be found here.
