
Neural network models have recently become more accurate and sophisticated, which increases the energy consumed when training and running them on conventional computers. Developers around the world are working on alternative, brain-like hardware that can deliver better performance under the heavy computational loads of artificial intelligence systems.
Researchers from the Technion – Israel Institute of Technology and the Peng Cheng Laboratory have recently created a new neuromorphic computing system that supports generative and graphical learning models, including deep belief neural networks (DBNs).
The work was presented in the journal Nature Electronics. The system is based on silicon memristors, energy-efficient devices for storing and processing information. We have previously covered the use of memristors in artificial intelligence. The scientific community has been working on neuromorphic computing for some time, and the use of memristors looks very promising.
Memristors are electronic components that can regulate the flow of electric current in a circuit and also remember how much charge has passed through them. They are well suited to running artificial intelligence models because their behavior and structure resemble synapses in the human brain more closely than conventional memory blocks and processors do.
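As a rough illustration of this synapse-like behavior, the minimal Python sketch below models a memristor-style synapse whose conductance is nudged up or down by programming pulses and is then read out via Ohm's law. It is not the device model from the paper; the class name, conductance range, and pulse step are illustrative assumptions.

```python
import numpy as np

class MemristiveSynapse:
    """Toy memristor-like synapse: conductance drifts with programming pulses
    and is bounded between g_min and g_max (illustrative model only)."""

    def __init__(self, g_min=1e-6, g_max=1e-4, g_init=5e-5):
        self.g_min, self.g_max = g_min, g_max
        self.g = g_init  # conductance in siemens

    def program(self, n_pulses, step=1e-6):
        """Potentiate (n_pulses > 0) or depress (n_pulses < 0) the synapse."""
        self.g = float(np.clip(self.g + n_pulses * step, self.g_min, self.g_max))

    def read(self, voltage):
        """Ohm's law: output current for a given read voltage."""
        return self.g * voltage


syn = MemristiveSynapse()
syn.program(+20)       # strengthen the synapse with 20 programming pulses
print(syn.read(0.1))   # current drawn at a 0.1 V read voltage
```

The stored conductance plays the role of a synaptic weight: it is both the memory and the multiplier, which is why memristive devices are attractive for in-memory AI computation.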
For the moment, however, memristors are still used mainly for analog computing and only to a much lesser extent in AI design. Since the cost of using memristors remains fairly high, memristive technology has not yet spread widely in the neuromorphic field.
Professor Kvatinsky and his colleagues from the Technion and the Peng Cheng Laboratory decided to work around this limitation. Since memristors are not widely available, the researchers used a commercially available flash technology developed by Tower Semiconductor and engineered its behavior to resemble that of a memristor. They also specifically tested their system with a DBN, an older theoretical concept in machine learning. They chose it because this type of network does not require analog data conversion: its input and output data are binary and inherently digital.
The scientists' idea was to use binary neurons, that is, neurons whose input and output values are either 0 or 1. The study examined two-terminal floating-gate synaptic devices fabricated in a standard CMOS manufacturing process. The resulting silicon-based floating-gate devices were dubbed silicon synapses. Neuronal states were fully binarized, which simplifies the design of the neural circuits, since expensive analog-to-digital and digital-to-analog converters (ADCs and DACs) are no longer necessary.
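To make the binary-neuron idea concrete, here is a minimal software sketch of a restricted Boltzmann machine, the building block of a DBN, in which every visible and hidden unit is strictly 0 or 1. It is a plain Python illustration rather than the hardware implementation described in the article, and the layer sizes and learning rate are assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BinaryRBM:
    """Restricted Boltzmann machine with strictly binary (0/1) units."""

    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases

    def sample_hidden(self, v):
        p_h = sigmoid(v @ self.W + self.b_h)
        return (rng.random(p_h.shape) < p_h).astype(float)  # binary sample

    def sample_visible(self, h):
        p_v = sigmoid(h @ self.W.T + self.b_v)
        return (rng.random(p_v.shape) < p_v).astype(float)

    def contrastive_divergence(self, v0, lr=0.05):
        """One CD-1 update: the weight change depends only on products
        of binary unit states, which maps naturally onto binary hardware."""
        h0 = self.sample_hidden(v0)
        v1 = self.sample_visible(h0)
        h1 = self.sample_hidden(v1)
        self.W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
        self.b_v += lr * (v0 - v1)
        self.b_h += lr * (h0 - h1)


rbm = BinaryRBM(n_visible=19, n_hidden=8)   # layer sizes chosen for illustration
v = rng.integers(0, 2, size=19).astype(float)
rbm.contrastive_divergence(v)
```

Because every quantity exchanged between layers is a 0/1 value, such a network can in principle be wired directly to binary driver and sensing circuitry, which is the point the authors make about dispensing with ADCs and DACs.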
Silicon synapses offer many advantages: analog tunability of conductance, high endurance, long retention time, predictable cyclic degradation, and moderate device-to-device variation.
Kvatinsky and his colleagues built a deep belief network consisting of three 19×8 restricted Boltzmann machines, implemented with two 12×8 memristor arrays.
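Continuing the sketch above (and reusing its BinaryRBM class and rng), stacking several such machines and training them greedily, one layer at a time, gives a simple software picture of a deep belief network. The layer sizes below are illustrative and do not reproduce the exact hardware mapping onto the two memristor arrays.

```python
# A deep belief network as a stack of the BinaryRBM sketched above,
# trained greedily layer by layer (software illustration only; the
# article's system realizes this in memristive hardware).
class DBN:
    def __init__(self, layer_sizes):
        self.rbms = [BinaryRBM(a, b) for a, b in zip(layer_sizes, layer_sizes[1:])]

    def train(self, data, epochs=10):
        for rbm in self.rbms:
            for _ in range(epochs):
                for v in data:
                    rbm.contrastive_divergence(v)
            # propagate binary samples upward to train the next layer
            data = [rbm.sample_hidden(v) for v in data]


dbn = DBN([19, 8, 8])   # sizes echo the 19x8 machines in the article, for illustration
samples = [rng.integers(0, 2, size=19).astype(float) for _ in range(32)]
dbn.train(samples, epochs=2)
```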
The system was tested on a modified MNIST dataset. The recognition accuracy of the network built on Y-Flash-based memristors reached 97.05%.
In the future, the developers plan to scale up this architecture, apply it more widely, and explore additional memristive technologies.
The architecture presented by the scientists offers a viable new way to run restricted Boltzmann machines and other DBNs. It could become the basis for developing similar neuromorphic systems and help improve the energy efficiency of AI systems.
The MATLAB code for a deep learning network based on a bipolar floating-gate (Y-Flash) device is available on GitHub.
