
SNN Autoencoder

To build AI that is superior to today's deep learning, we need better neural networks, with more powerful processing in both space and time, and the ability to form complex circuits, including feedback. We use spiking neural networks, in which signals travel between neurons over time, gated by synapses. Our spiking neural network for sensory encoding is a Bidirectional Interleaved Complementary Hierarchical Neural Network (BICHNN), the equivalent of a cortical column in the human brain: a small computational unit with specific structure and connections, wired in a specific way to do a specialized job.
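The BICHNN's internal neuron model isn't detailed here, but the core idea of spiking computation in time can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron sketch in Python. All parameter values below are illustrative assumptions, not NeuroCAD's:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron: the membrane potential
    integrates input over time and emits a spike on crossing threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (v_rest - v) + i_in   # leak toward rest + integrate input
        if v >= v_threshold:
            spike_times.append(step * dt)        # record when the spike occurred
            v = v_reset                          # reset after the spike
    return spike_times

# A constant drive produces a regular spike train spread out in time.
print(simulate_lif(np.full(200, 0.06)))
```

The point of the time dimension is that information is carried in when spikes occur, not just in activation magnitudes as in conventional deep learning.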

We design, evolve, and train these artificial cortical columns in a tool called NeuroCAD, which we plan to make available for public use in 2022.

Here is our 'first light' implementation of NeuroCAD running on CUDA: a 100,000-neuron, 10-million-synapse simulation of a cortical column driven by input from the top layer. We would need a million of these running on the ORNL Summit supercomputer's roughly 28,000 GPUs to match the human brain today, but we can deliver a lot of functionality with far less than that for now.

[Video: NeuroCAD 'first light' cortical column simulation]

The cerebral cortex of the human brain consists of a sheet of about a million of these columns packed side by side, so we would need a supercomputer with about 30,000 GPUs to run a whole brain. That is roughly the scale of the ORNL Summit supercomputer I worked on at NVIDIA: https://www.olcf.ornl.gov/summit/
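The scale arithmetic behind these figures can be checked directly, using only the numbers quoted above:

```python
# Back-of-envelope scaling, using only the figures quoted in the text.
neurons_per_column  = 100_000
synapses_per_column = 10_000_000
columns_in_cortex   = 1_000_000       # ~1M columns in the human cerebral cortex
summit_gpus         = 28_000          # approximate GPU count on ORNL Summit

print(synapses_per_column // neurons_per_column)   # 100 synapses per neuron
print(neurons_per_column * columns_in_cortex)      # 100,000,000,000 neurons (~100B)
print(columns_in_cortex / summit_gpus)             # ~36 columns per GPU
```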

But we are using these on a smaller scale for now, to autoencode inputs from vision, audio, and other data into compact engrams, learning to do so unsupervised. This means that, exposed to different input types and data, these autoencoders learn to see, hear, recognize speech, and perform many other sensory tasks, as well as encode arbitrary data streams.
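The spiking BICHNN itself is not shown here, so as a stand-in, here is a minimal conventional autoencoder in Python that learns a compact code (an "engram") from unlabeled inputs by minimizing reconstruction error. The dimensions and the random data are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a 256-value sensory frame compressed to a 32-value engram.
n_input, n_engram = 256, 32
W_enc = rng.normal(0, 0.1, (n_engram, n_input))
W_dec = rng.normal(0, 0.1, (n_input, n_engram))

def encode(x):
    return np.tanh(W_enc @ x)       # compact code ("engram")

def decode(z):
    return W_dec @ z                # reconstruction from the engram

# Unsupervised training: minimize reconstruction error; no labels required.
lr = 0.01
for _ in range(1000):
    x = rng.normal(size=n_input)            # stand-in for a sensory frame
    z = encode(x)
    err = decode(z) - x
    grad_dec = np.outer(err, z)             # gradient of 0.5*||err||^2 w.r.t. W_dec
    dz = (W_dec.T @ err) * (1.0 - z**2)     # backprop through tanh
    W_dec -= lr * grad_dec
    W_enc -= lr * np.outer(dz, x)
```

Because the training signal is the input itself, this kind of encoder can keep refining its representation on whatever data it encounters, with no labeling step.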

The BICHNN circuit allows a sensory system to learn unsupervised, and to keep learning even after deployment, so it can fill in the gaps in its knowledge as it moves around and experiences the world. Without this capability, true computer vision, robust speech recognition, and interpretation of the environment and data inputs would be impossible to fully realize. This is one of the reasons deep learning is a dead end: it can only train on fixed data sets.

Our BICHNN SNN essentially wraps the functionality of CNNs, RNNs, LSTMs, and other DL neural nets into a single, more general architecture that can be specialized by evolution via genetic algorithms to do most sensory and data encoding/decoding tasks. The output is compatible with present DL methods for clustering, PCA, training predictors, and more, and it is a component of the more advanced AGI we are working on.
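Assuming the engrams come out as fixed-length vectors (an assumption, since the exact output format isn't specified here), standard downstream tools plug in directly. A sketch with scikit-learn, using placeholder data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Placeholder: 500 engrams of length 32, standing in for real encoder output.
engrams = np.random.default_rng(1).normal(size=(500, 32))

coords = PCA(n_components=2).fit_transform(engrams)            # 2-D projection
labels = KMeans(n_clusters=5, n_init=10).fit_predict(engrams)  # cluster assignments
print(coords.shape, labels[:10])
```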

The human brain is specified by only about 8,000 genes, decoded by the growth process during fetal development. We will do the same, because we can't run genetic algorithms directly on 100 billion neurons, but we can run them far more efficiently on a few thousand genes, then expand the cross-bred genomes into 100-billion-neuron brains.

We shape the SNNs with genetic algorithms, using a compact genome that can be expanded into a full neural net, which gives us a much smaller parameter search space. Every neuron, synapse, and neural network parameter, and how they are organized into layers, cortices, and brains, is defined by a genome. As we breed generations of ever more sophisticated SNNs, shaping them with user-specified data sets and selection criteria, each generation evolves more efficient neural networks specialized for specific purposes, as in the sketch below.
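NeuroCAD's genome format and growth process aren't public, so this is only a toy sketch of the develop-then-evolve loop: a short genome is "grown" into a full network, scored by a user-specified fitness, then selected, crossed over, and mutated. Every function and parameter here (expand, fitness, the sizes) is a hypothetical placeholder:

```python
import numpy as np

rng = np.random.default_rng(2)
GENOME_LEN = 64   # a few dozen "genes", not billions of weights

def expand(genome):
    """Stand-in for the growth process: deterministically 'grow' a full
    weight matrix from a compact genome by using it as a random seed."""
    g = np.random.default_rng(abs(hash(genome.tobytes())) % 2**32)
    return g.normal(size=(100, 100))

def fitness(genome, data):
    """User-specified selection criterion; here, reconstruction quality."""
    net = expand(genome)
    return -np.mean((data @ net - data) ** 2)

def evolve(data, pop_size=20, generations=50, mutation=0.1):
    pop = [rng.normal(size=GENOME_LEN) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, data), reverse=True)
        parents = pop[: pop_size // 2]                    # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.choice(len(parents), 2, replace=False)
            cut = rng.integers(1, GENOME_LEN)             # one-point crossover
            child = np.concatenate([parents[a][:cut], parents[b][cut:]])
            child += rng.normal(0, mutation, GENOME_LEN)  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda g: fitness(g, data))

best_genome = evolve(rng.normal(size=(50, 100)))
```

The key design point is that crossover and mutation operate on the short genome, while fitness is measured on the fully grown network, which is what keeps the search space small.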

These encoded inputs can be combined into a composite engram that includes all the sensory modalities being sampled, allowing them to be associated in time and space.

And these engrams can be encoded from any data, for any application. In medicine, for example, we can encode the state of a person's symptoms, vitals, and medical imaging into an engram.
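In the simplest case, a composite engram can be a concatenation of per-modality codes. A hypothetical sketch following the medical example, with made-up modality names and sizes:

```python
import numpy as np

def composite_engram(modalities):
    """Concatenate per-modality engrams into one composite vector, so
    signals sampled together are associated in a single representation."""
    return np.concatenate([modalities[name] for name in sorted(modalities)])

# Hypothetical medical example: each stream encoded separately, then fused.
engram = composite_engram({
    "symptoms": np.random.rand(16),
    "vitals":   np.random.rand(8),
    "imaging":  np.random.rand(64),
})
print(engram.shape)   # (88,)
```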

This gives the Bidirectional Interleaved Complementary Hierarchical Neural Net (BICHNN) architecture, and the underlying Core AGI, the ability to function in almost any area of information science where a diverse variety of inputs can be coalesced and encoded into a compact engram format, so the Core AGI can learn how they evolve in time and build models.