NeuroCAD-V4.1

NeuroCAD

NeuroCAD is a patented software tool with a UI for designing Spiking Neural Networks, and it forms the foundation for ORBAI's AGI technology. It allows the user to lay out layers of spiking neurons, connect them algorithmically, and crossbreed and mutate them to generate a population of similar neural nets. The user can then run simulations on the population, train the networks, crossbreed the top-performing designs, and continue the genetic algorithms until a design emerges that meets the performance criteria set by the designer.

Spiking Neural Networks are a more realistic simulation of how real neurons, neural networks, sensory systems, and intelligence work, with spiking signals that travel between neurons in time, and sophisticated processing and integration of these signals at neurons and synapses. They have distinct advantages over Deep Learning's simpler feed-forward neural networks, but they are more challenging to design, train, test, and work with, so we are building a CAD tool designed specifically for working with Spiking Neural Networks.

NeuroCAD has five main pages: Layout, Connection, Breeding, Training, and Selection.

Layout

On the Layout page, we will lay out the geometry of the neurons in sheets, columns, cylindrical, spherical, or conical shells, or other geometries. We will be able to choose the neuron model for each layer, create multiple layers of neurons, morph the shapes of the layers, and position them relative to each other. Geometry matters with spiking neurons because the connections are physical, with distance and transmission time as well as a logical connection.

In this screen, the user needs to be able to set the number of layers of neurons and, for each layer: the number of neurons in X and Y, the neuron simulation model to use for that layer, and other parameters for the layer. A Layer Properties box collects all of these settings and is repeated in the UI for every layer.
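A minimal sketch (in Python) of what these per-layer settings might look like as a data structure; all names, fields, and defaults here are illustrative assumptions, not NeuroCAD's actual data model:

    # Illustrative sketch of the Layer Properties box as a data structure.
    from dataclasses import dataclass

    @dataclass
    class LayerProperties:
        name: str
        neurons_x: int                      # number of neurons along X
        neurons_y: int                      # number of neurons along Y
        geometry: str = "sheet"             # "sheet", "column", "cylinder", "sphere", "cone"
        neuron_model: str = "LIF"           # simulation model, e.g. leaky integrate-and-fire
        position: tuple = (0.0, 0.0, 0.0)   # placement relative to the other layers

    # A network layout is then just an ordered list of layers.
    layers = [
        LayerProperties("input", 256, 256),
        LayerProperties("hidden", 128, 128, neuron_model="Izhikevich",
                        position=(0.0, 0.0, 1.0)),
    ]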

Connection

On the Connection page, we decide how the neurons are going to connect to neurons in other layers and within their own layer.

A connectome is a complete description of all the connections between all neurons in a neural net. It is basically the wiring scheme for a brain or neural network, and it can contain enormous amounts of information.

A genome is a compact set of information that can describe a much more complex system (like a connectome). For example, in human DNA, only about 8,000 genes encode how to build the human brain, which grows to have 100 billion neurons with over 100 trillion connections. Changes in these 8,000 genes change the way the brain develops, how it is structured, and how it functions, and minor changes to the genome can cause brains to have greatly reduced form and functionality. There is a mapping from these 8,000 genes to the final connectome of the human brain: some sort of expansion, or biological decompression scheme.

Since each neuron has at least 1,000 connections, and there are millions of neurons in each sheet, we use a probabilistic way to pseudo-randomly connect the neurons, based on a random seed and procedural probability maps that spatially determine where the neurons are likely to connect. This at first seems crude, but it roughly follows the principle that biological neurons connect by growing along chemical trails or gradients that have some spatial distribution in the brain. By using composited procedural maps in NeuroCAD, we can start with a set of user-selected numerical parameters that completely describe how the resulting procedural image will look. These procedural parameters for all the maps, plus the mixing parameters, become the genome for this connectome, or connection scheme: all of it can be reproduced from a few parameters and a random seed.

Parameters (Genome) -> 2D Algorithms -> 2D Probability Maps -> Connectome

In this scheme, let's take a neuron in the center of layer 1 and determine which neurons it will connect to in layer 2. We can use a 2D Gaussian distribution centered right above the neuron on layer 2, so it is most likely to connect to the neurons directly above it and less likely to connect further away, as in the first picture (2D Gaussian). Or we might want a procedural fractal that lets the neuron connect over a wide area on layer 2 while following a specific pattern, as in picture 2 (2D Procedural Fractal). Or we can composite (multiply) the two to get a procedural pattern that is still spatially localized, as in picture 3.
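A sketch of this composition in Python: a 2D Gaussian centered above the source neuron is multiplied by a procedural pattern, normalized into a probability distribution, and sampled to pick the target neurons. The simple interference pattern below stands in for a real procedural fractal, and all function and parameter names are illustrative:

    import numpy as np

    def gaussian_map(w, h, cx, cy, sigma):
        # Probability falls off with distance from (cx, cy) on the target layer.
        y, x = np.mgrid[0:h, 0:w]
        return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

    def procedural_map(w, h, freq):
        # Stand-in for a procedural fractal: a simple interference pattern.
        y, x = np.mgrid[0:h, 0:w]
        return 0.5 * (1.0 + np.sin(freq * x) * np.sin(freq * y))

    def sample_targets(genome, n_connections):
        w, h = genome["width"], genome["height"]
        p = gaussian_map(w, h, genome["cx"], genome["cy"], genome["sigma"])
        p *= procedural_map(w, h, genome["freq"])    # composite (multiply)
        p /= p.sum()                                 # normalize to a distribution
        rng = np.random.default_rng(genome["seed"])  # deterministic given the seed
        idx = rng.choice(w * h, size=n_connections, replace=False, p=p.ravel())
        return np.unravel_index(idx, (h, w))         # (row, col) of target neurons

    genome = {"width": 64, "height": 64, "cx": 32, "cy": 32,
              "sigma": 6.0, "freq": 0.8, "seed": 42}
    rows, cols = sample_targets(genome, n_connections=100)

The handful of numbers in genome, plus the seed, fully determine which of the candidate neurons this one connects to.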

So we will allow NeuroCAD designers to specify probability maps not only for connections to the adjoining layer, but to all layers in the network (including the current layer), with each layer pair having its own probability map suited to the functionality of more distant and reverse connections. These maps should probably bias the connection probability towards closer layers, but we will let designers experiment and see what works best.

Breeding

Because designing spiking neural nets and their connectomes is so complicated, we humans simply have no intuition for laying out or designing these networks and predicting how they will work, nor are there any mathematical or theoretical principles to work from. We simply cannot lay down millions of neurons in a dozen layers and hope to know how to connect them best for a vision network or a speech network. Even if we could get micron-level 3D scans of the human brain's connectome, the subtle differences between real biological neurons and even our best neuron and synapse models would mean those connectomes would not work on our artificial networks. At best, they could be a guide or a starting point, but even then, we need a way to explore designs from there.

So, we will design these networks the same way nature did: by natural selection, or rather genetic algorithms, to explore connectome configurations (and layer configurations). Genetic algorithms work by creating many copies of our neural net, giving each a slight variation, then testing them against a performance metric to see which performed best. We then take the top 10% of those networks and create a new population from them. To do this, we cross-breed them, as in the loop sketched below.

Create -> Breed -> Test -> Cull -> Breed -> Test (Repeat till test gives high enough score)
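A sketch of this loop in Python, operating on the compact genomes discussed below; fitness(), crossover(), and mutate() are placeholders for the real scoring, breeding, and mutation operations:

    def evolve(population, fitness, crossover, mutate,
               keep_frac=0.1, target_score=0.95, max_generations=100):
        best = None
        for generation in range(max_generations):
            # Test: score every network, best first.
            scored = sorted(population, key=fitness, reverse=True)
            best = scored[0]
            if fitness(best) >= target_score:
                break                        # a design meets the criteria
            # Cull: keep only the top fraction as parents.
            parents = scored[:max(2, int(len(scored) * keep_frac))]
            # Breed: cross every parent with every other parent, then mutate.
            population = [mutate(crossover(a, b))
                          for a in parents for b in parents]
        return best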

In this example, we have 5 networks, and we want to cross-breed each network with each other network to get 25 new networks. Life on Earth discovered sexual reproduction 1.2 billion years ago, and after being stuck as single-celled, self-reproducing organisms for 3 billion years, life suddenly exploded in variety, complexity, and capability. Sex, or breeding, greatly accelerates evolution: it allows two distinct successful organisms to combine genetic material, randomly shuffle it a bit, then produce offspring that can be quite different from either parent, and perhaps better adapted than either. Asexual reproduction only allows change to happen slowly, as an organism's DNA randomly mutates once in a while, and most mutations are NOT beneficial.

Properly set up, genetic algorithms are a very powerful method for finding solutions to very complicated problems that are intractable to normal mathematical optimization methods. However, setting them up can be tricky.

If we built a 1-million-neuron network with a billion connections (a connectome), made 100 copies of it, each with slightly different connections modified one by one with an algorithm, and then applied genetic algorithms using changes to the connectome as the basis for breeding, it would never work. The space of one billion connections that we are trying to search is TOO HUGE, and even with infinite compute power we would never converge on a USEFUL neural network.

This is why we need a compact genome as the basis set for all the breeding operations, so that when we apply the expansion / unpacking / decompression of each new genome to get a unique connectome, it is a meaningful one. We need a small genome that will reliably re-create the same connectome in a deterministic fashion, so we can always reproduce it, and slight variations to that genome should produce connectomes that are close to each other as well; otherwise genetic algorithms will not work very well.
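A sketch of such a deterministic expansion, under the assumption that the genome is just a short parameter vector plus a seed; the same genome always reproduces the same connectome, and nearby genomes produce similar connectomes because they only nudge the underlying distributions:

    import numpy as np

    def expand(genome, n_neurons=1000, fan_out=100):
        # Fixed seed => the same genome always yields the same connectome.
        rng = np.random.default_rng(genome["seed"])
        sigma = genome["sigma"]              # breadth of each neuron's reach
        connectome = {}
        for src in range(n_neurons):
            # Small changes to sigma shift these target distributions
            # smoothly rather than scrambling them.
            targets = (src + rng.normal(0.0, sigma, size=fan_out)).astype(int)
            connectome[src] = np.clip(targets, 0, n_neurons - 1)
        return connectome

    c1 = expand({"seed": 7, "sigma": 10.0})
    c2 = expand({"seed": 7, "sigma": 10.0})  # identical to c1: same genome in, same connectome out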

Training

Once we have created the new networks on the Breeding page, we need to train them and figure out which ones to keep. First, the user inputs their training scenario, which is a scaled-down version of the 'job' the user wants the network to do, including sample data and the test criteria. If it is a vision network doing object identification, we need images or videos of objects and labels, be they text labels, verbal labels, or classification vectors, and we need a way to feed them to the network such that it will train.

Back-propagation and the other methods used to train conventional DL feed-forward and recurrent neural networks do not work for spiking neural networks, because there is no differentiable transfer function that we can run backward through the network to 'tune' weights. Spiking neurons 'learn' the same way our brains do: synapses that 'fire together wire together', described as Hebbian learning or long-term potentiation. Basically, a spike enters the pre-synaptic side of a synapse, is chemically transmitted across it, and causes the post-synaptic side to fire, sending a spike on toward the next neuron. If the post-synaptic neuron fires within a certain interval of the neuron that originated the signal, the synapse strengthens, so that subsequent signals are amplified more when crossing it. This is the fundamental process by which learning occurs in the human brain, and it is replicated in spiking neural networks.
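A minimal sketch of this rule in Python, using the common spike-timing-dependent plasticity (STDP) formulation, which also weakens the synapse when the spikes arrive in the reverse order; all constants here are illustrative:

    import math

    def stdp_update(weight, t_pre, t_post,
                    a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
        dt = t_post - t_pre                  # spike-time difference in ms
        if dt > 0:                           # pre fired just before post: potentiate
            weight += a_plus * math.exp(-dt / tau)
        else:                                # post fired first: depress
            weight -= a_minus * math.exp(dt / tau)
        return min(max(weight, 0.0), w_max)  # clamp to [0, w_max]

    w = 0.5
    w = stdp_update(w, t_pre=10.0, t_post=13.0)  # fired together: weight increases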

However, more is needed for a spiking neural network to learn to do actual tasks and be useful. The network has to be structured so that it is capable of learning and getting better at a task through synapses undergoing long-term potentiation. If we look closely at animal sensory cortices, they have two complementary networks: one that processes sensory input hierarchically into more abstract representations, for example in the visual cortex (retina image -> V1 -> V2 -> … -> name), and one that runs in the opposite direction (name -> … -> V2 -> V1) and allows us to visualize objects (close your eyes and picture a 'fire truck'; there you go, you just saw a fire truck and used your visual cortex in reverse). These two networks interact: the processing of the first is influenced and filtered by the second, and they also reinforce and train each other, which is part of the learning process. There has been much research in neuroscience in this area by Rajesh Rao, Dana Ballard, and Miguel Nicolelis.

By architecting SNNs in NeuroCAD with ORBAI's Bidirectional Interleaved Complementary Hierarchical Neural Networks (BICHNN), we have two complementary spiking neural networks together, each with signals moving in opposite directions, but interacting and providing feedback to each other, allowing these networks to self-train, just like the human visual cortex does. In this method for training an artificial neural network (either spiking or feed-forward), there are two networks that are intertwined and complementary to one another. One transmits signals in one direction, say from the sensory input up through a hierarchical neural structure to more abstract levels, to eventually classify the signals. The complementary network interleaved with it carries signals in the opposite direction, from abstract to concrete, and from classification to sensory stimulus. The signals or connection strengths in these two networks can be compared at the different levels of the network, and the differences used as a 'training' signal: strengthen network connections where the differences are smaller and the correlation tighter, and weaken network connections where the differences are larger and not as tightly correlated. The signals can be repeatedly bounced back and forth off the highest and lowest levels to set up a training cycle.
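A toy numerical sketch of that comparison step, not ORBAI's patented method: at each level, the forward and backward activity patterns are compared, and per-connection gains are strengthened where the two agree and weakened where they do not:

    import numpy as np

    def bidirectional_step(forward_acts, backward_acts, gains, lr=0.01):
        # forward_acts, backward_acts, gains: one same-shaped float array per level.
        for level, (f, b) in enumerate(zip(forward_acts, backward_acts)):
            diff = np.abs(f - b)                          # disagreement at this level
            agreement = 1.0 - diff / (diff.max() + 1e-9)  # 1.0 = tightly correlated
            gains[level] += lr * (agreement - 0.5)        # strengthen agreement, weaken the rest
            np.clip(gains[level], 0.0, 1.0, out=gains[level])
        return gains

Bouncing signals off the top and bottom of the network and calling a step like this after each pass gives the training cycle described above.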

For example, above is what a training cycle would look like with one side taking in input (pictures) and outputting a compact, compressed representation of it (an encoding). To train this, we feed the encoded version backwards into the net to see what output it generates on the picture side while the input picture is still being fed in. All of the layers in the middle compare the signals going each way and, if we have correctly set up the BICHNN network, will train so that the network learns to auto-encode, and the output picture and input picture converge with training (dreaming?). This is extremely powerful and much better than a standard DL autoencoder, because not only can it do this on any type of spatio-temporal data, even constant streams, but it does it dynamically, as it observes the dataset, and later the real world. This is revolutionary.

Selection

In this screen, the user sets up the selection criteria to be used. The criterion may simply be the correlation between the input data sent in at one end and the result coming back out of that same end after being sent through the network from the other side and reconstructed, measuring how closely the two have converged.

The user may even want to try the networks out and see how they look and function, to select them manually. Otherwise, this screen runs in automatic mode and simply selects the top N% (N specified in the GUI) of the networks for the next round. Once the top networks are selected, they are sent back to the Breeding page and the process is repeated until a satisfactory level of performance is reached.
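A sketch of the automatic mode, assuming the reconstruction-correlation criterion above; reconstruct() stands in for the round trip through a trained network:

    import numpy as np

    def reconstruction_score(net, inputs, reconstruct):
        recon = reconstruct(net, inputs)  # input -> encoded -> reconstructed input
        # Pearson correlation between the original and reconstructed signals.
        return np.corrcoef(inputs.ravel(), recon.ravel())[0, 1]

    def select_top(networks, inputs, reconstruct, top_percent=10):
        scored = sorted(networks,
                        key=lambda n: reconstruction_score(n, inputs, reconstruct),
                        reverse=True)
        n_keep = max(1, len(scored) * top_percent // 100)
        return scored[:n_keep]            # these go back to the Breeding page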

NeuroCAD licensing

For our go-to-market plan, we will make the open-use version of NeuroCAD and select Neural Net Modules for academia and research available in early 2022, along with the ORBAI Marketplace for trading Neural Net modules. We will license NeuroCAD and modules for commercial use in early 2023, with a monthly license fee per commercial seat of NeuroCAD and license fees for modules from the marketplace. We will work with key partners to leverage their marketing, sales, and customer bases, sell alongside solutions from system integrators and hardware vendors, and provide NeuroCAD trials as well as direct sales and licensing online from our web portal.

See the NeuroCAD Pitch Video for more information about the technology, product, and business model.