NeuroCAD Technology

ORBAI develops revolutionary AI technology based on our proprietary, patent-pending time-domain neural network architecture. We license the resulting computer vision, sensory, speech, navigation, planning, and motor control technologies for applications in AI, robotics, drones, cars, appliances, and many other consumer products.

ORBAI's Bidirectional Interleaved Complementary Hierarchical Neural Networks (BICHNN) are a novel, patent-pending AI architecture using two interleaved spiking neural networks that work together to train each other to perform tasks, in a way similar to how the human sensory cortices do. As a result, they train by simply sensing and interacting with the real world, learning from experience, context, and practice, requiring neither canned training data nor explicit training sessions. NeuroCAD is the tool suite and UI for designing, testing, scaling, and evolving these advanced network architectures.

We prototype and test this technology in our humanoid hologram and android creations, because driving the sensory systems, AI, and motion control of humanoids takes far more advanced, powerful, and flexible neural network architectures than today's deep learning (DL) technologies can deliver. It requires dynamic spiking neural network technology like BICHNN, whose rich time-domain behavior makes it ideal for motor control and for sensory applications like vision, hearing, and touch. Networks like these can dynamically learn to sense the environment and control these artificial creations so that they move and act like real people, with realistic facial expressions and lip sync to match their intelligent, interactive speech. This is THE killer app for AI.

What most people practicing DL (what we call 'AI' today) don't realize is that today's 'neural networks' and 'neurons' in deep learning are just the simplest subset of a much larger and richer family of synthetic neurons, neural networks, and methods that have been developed going back as far as 1955. Most of the layered neural networks and CNNs used in DL today fall into a smaller family called feed-forward neural networks, which simply sum the weighted inputs at each node, apply a simple transfer function, and pass the result to the next layer. This is not an accurate model of how the brain works by any means, and even RNNs and reinforcement learning are not giving us true artificial intelligence; they are just fitting the parameters of very large, complex functions to large amounts of data and using statistics to find patterns and make decisions. No matter how you structure these feed-forward and recurrent networks, or how large you make them, you will never truly build something that shows artificial intelligence: something that can sense in multiple ways, integrate that information, and learn and reason like humans do from our everyday environment. The limit is that the 'neural' nodes that are the basic components of deep learning architectures just do not have enough computational capability or flexibility.
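
To make the contrast concrete, here is a minimal sketch (in Python; the function name and values are purely illustrative) of the deep-learning 'neuron' described above: a weighted sum passed through a simple transfer function, producing one static number with no notion of time.

```python
import numpy as np

def feed_forward_node(inputs, weights, bias):
    """One deep-learning 'neuron': sum the weighted inputs, apply a
    simple transfer function (here ReLU), and pass the result on."""
    return max(0.0, float(np.dot(inputs, weights) + bias))

# Three inputs in, one static activation out; no time-domain behavior.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.2, -0.4])
print(feed_forward_node(x, w, bias=0.1))
```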

Spiking neural networks give a more accurate model of how real neurons operate, ranging from simple, compute-efficient models like integrate-and-fire and Izhikevich, to more complex models like Hodgkin-Huxley that come close to capturing a biological neuron's behavior and how networks of such neurons interact and work in the brain, opening up much richer neural computational models.
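
For comparison with the static node above, here is a minimal leaky integrate-and-fire simulation (a sketch; the parameter values are arbitrary). Its output is a pattern of spike times, not a single number:

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Leaky integrate-and-fire: the membrane voltage integrates its
    input over time and leaks toward rest; crossing the threshold
    emits a spike and resets the neuron."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (v_rest - v + i_in)  # leaky integration
        if v >= v_threshold:                   # threshold crossing
            spike_times.append(step * dt)      # record the spike time
            v = v_reset                        # reset after firing
    return spike_times

# A constant drive yields a regular spike train: the output is a
# pattern in time, not a single static activation value.
print(simulate_lif(np.full(100, 1.5)))
```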

In real neurons, time-domain signal pulses travel along the dendrites and arrive at the neuronal body independently, where they are integrated in time and space (some excite, some inhibit). When the neuronal body is triggered, it produces a time-dependent train of pulses down its axon; these split up as the axon branches and take time to travel out to the synapses, which themselves exhibit a non-linear, delayed, time-dependent integration as the chemical neurotransmitter signal passes across the synapse to eventually trigger a signal in the post-synaptic dendrite. If the neurons on both sides of a synapse fire together within a certain interval, the synapse strengthens in the process; this is Hebbian learning (in its timing-dependent form, spike-timing-dependent plasticity, or STDP). We may never be able to completely replicate all the electrochemical processes of a real biological neuron in hardware or software, but we can search for models that are sophisticated enough to represent much of the useful behavior needed in our spiking artificial neural networks.
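
A common way to model this timing-window rule in software is pair-based STDP. The sketch below is the textbook form of the rule, not ORBAI's specific implementation, and the constants are illustrative:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: if the pre-synaptic spike precedes the
    post-synaptic spike within the time window, strengthen the
    synapse; if it follows it, weaken the synapse."""
    dt = t_post - t_pre
    if dt > 0:                               # pre before post: potentiate
        w += a_plus * math.exp(-dt / tau)
    else:                                    # post before pre: depress
        w -= a_minus * math.exp(dt / tau)
    return min(w_max, max(w_min, w))

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=12.0)  # post fires 2 ms later: stronger
w = stdp_update(w, t_pre=30.0, t_post=25.0)  # post fired 5 ms earlier: weaker
print(w)
```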

This will bring us closer to more human-like AI, as real brains get much of their computing, sensory processing, and body-controlling capability from the fact that signals 'travel' through neurons, axons, synapses, and dendrites, and thus through the brain's structures, in complex, time-dependent circuits. These circuits can even have feedback loops that act as timers or oscillators, or activate in repeatable, cascading patterns to send specific time-dependent sequences of controlling signals to groups of muscles or actuators. These networks also learn by directly strengthening connections between neurons that repeatedly fire together (Hebbian learning, as above). For more complex AI and decision-making, they are much more powerful than the CNNs, static RNNs, and even deep reinforcement learning in common use today.
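
As a toy illustration of such a time-dependent circuit (the wiring and delay here are contrived for the example), the sketch below connects two spiking units in a feedback loop with an axonal delay, so a single seed spike circulates indefinitely, like a neural timer or oscillator:

```python
def ring_oscillator(steps=24, delay=3):
    """Two spiking units in a feedback loop with axonal delays: one
    seed spike circulates forever, yielding a periodic firing pattern."""
    a_to_b = [0] * delay              # spikes in flight from A to B
    b_to_a = [0] * delay              # spikes in flight from B to A
    a_fire, b_fire = 1, 0             # seed a single spike in neuron A
    trace = []
    for _ in range(steps):
        a_in = b_to_a.pop(0)          # spike arriving at A, if any
        b_in = a_to_b.pop(0)          # spike arriving at B, if any
        a_to_b.append(a_fire)         # A's output starts traveling to B
        b_to_a.append(b_fire)         # B's output starts traveling to A
        a_fire, b_fire = a_in, b_in   # each unit fires iff a spike arrived
        trace.append((a_fire, b_fire))
    return trace

print(ring_oscillator())  # A and B alternate firing every delay + 1 steps
```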

But there is one huge drawback: to date there have been no reliable methods for fitting these kinds of networks to data to 'train' them. There is no back-propagation, no gradient descent to tune the synaptic weights between neurons. The synapses just strengthen or weaken via Hebbian learning as the network goes about its business of operating, which may or may not work in practice to train our synthetic networks, since they have to be structured correctly in the first place for it to work. This is an area of ongoing research, and a breakthrough here could be very significant. Below are my ideas, from the ORBAI provisional patent (US 62687179, filed Jun 19, 2018):

I will first describe an approximation of how we best understand the visual cortex to work: not only are images from our retinas processed into higher-level, more abstract patterns and eventually 'thoughts' as they move deeper through the higher levels of the visual cortex (similar to the classic CNN model), but thoughts also cascade in the other direction through the visual cortex, becoming features, and eventually images at the lowest levels of the cortex, where they resemble the images on our retinas. Just pause for a minute, close your eyes, and picture a 'fire truck'... see, it works: you can visualize, and perhaps even draw, a fire truck, and in doing so you just used your visual cortex in reverse. CNNs cannot do that. Because our visual cortex works like this, we are always visualizing what we expect to see and constantly comparing that with what we are actually seeing, at all levels of the visual cortex. In general, neuroscientists have found that sensing is a dynamic, interactive process, not a static feed-forward one.

This describes a method for training an artificial neural network (either spiking or feed-forward) in which two networks are intertwined and complementary to one another. One transmits signals in one direction, say from the sensory input up through a hierarchical neural structure to more abstract levels, to eventually classify the signals. A complementary network interleaved with it carries signals in the opposite direction: from abstract to concrete, from classification back to sensory stimulus. The signals or connection strengths in these two networks can be compared at the different levels of the hierarchy, and the differences used as a 'training' signal: strengthening network connections where the differences are smaller and the correlation tighter, and weakening connections where the differences are larger and the correlation looser. The signals can be repeatedly bounced back and forth off the highest and lowest levels to set up a training loop. We call these Bidirectional Interleaved Complementary Hierarchical Neural Networks (BICHNN).
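
Here is one way to render the idea as a toy, rate-coded sketch (the actual BICHNN uses spiking networks; the layer sizes and the exact local update rule below are illustrative assumptions, not the patented method): an upward network and an interleaved downward network are compared level by level, and each is nudged toward agreement with the other.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [16, 8, 4]   # level 0 (concrete/sensory) up to level 2 (abstract)
up   = [rng.normal(0, 0.5, (sizes[i + 1], sizes[i])) for i in range(2)]
down = [rng.normal(0, 0.5, (sizes[i], sizes[i + 1])) for i in range(2)]

def bichnn_step(x, y, lr=0.01):
    """One 'bounce': drive the upward net from sensory input x and the
    interleaved downward net from abstract code y, then use the
    level-by-level disagreement as a local training signal."""
    ups = [x]
    for w in up:                          # concrete -> abstract
        ups.append(np.tanh(w @ ups[-1]))
    downs = [y]
    for w in reversed(down):              # abstract -> concrete
        downs.append(np.tanh(w @ downs[-1]))
    downs = downs[::-1]                   # downs[i] is activity at level i
    for i in range(len(up)):              # nudge each net toward the other
        up[i]   += lr * np.outer(downs[i + 1] - ups[i + 1], ups[i])
        down[i] += lr * np.outer(ups[i] - downs[i], downs[i + 1])
    return np.mean([np.mean((ups[i] - downs[i]) ** 2)
                    for i in range(len(sizes))])

x = rng.normal(size=16)   # a 'sensory' pattern
y = np.eye(4)[1]          # the abstract code it should correspond to
for _ in range(200):
    disagreement = bichnn_step(x, y)
print(disagreement)       # shrinks as the two networks co-train
```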

If this works well for synthetic neural networks, the result could be profound: we could 'train' these neural networks while they operate, in situ, in real time, on real-world data they have never seen before (think 'Chappie'). This is far more dynamic, useful, and powerful than back-propagation and gradient descent during dedicated training for CNNs, and it wraps the functionality of (self-training) CNNs, GANs, and even autoencoders into a single, more elegant architecture (which is expected if we are moving from special-purpose feed-forward networks to more functional, robust, and brain-like networks and neurons). With a bit of 'retrofitting', perhaps this technique could even be used to train a standard feed-forward CNN with an inverse, feedback CNN interleaved with it.

Going back to GANs: they are close cousins of these interleaved complementary networks, because both pairs of networks are inverses of each other and each trains the other. The difference is that GANs are loosely coupled at specific interface points, while the dynamic training networks can be very densely connected and as tightly coupled as you wish. This is huge, because one of the largest difficulties with GANs is determining the feedback method and signals to send between the generator and discriminator, and for data types other than simple images this can get complicated. The dynamic training method can work for any arbitrary system - vision, hearing, speech, motor control, and so on - because the training and communication method is built into the system, already adapted to the network architecture. Just like human neural systems, these networks are multi-functional, multipurpose, elegant, and very powerful.

Another problem with spiking neural nets is how to connect the neurons in the first place. Sure, we can train the networks and strengthen or weaken synapses once we get rolling, but how do you figure out how to construct them and wire them together to begin with? Without going into enormous detail: we start with small hand-wired networks and use genetic algorithms to explore their design space, training with the above technique against simple performance metrics. We then assign a gene to each subnet that works well and start replicating those subnets into bigger networks, again using genetic algorithms to shuffle the genes (and the subnets) while training against more complex performance metrics, and we keep iterating, hierarchically building larger networks at each step and assigning 'genes' to each construct at each level. Since the entire human brain develops from the information encoded in roughly 8,000 genes into a 100-billion-neuron, 100-trillion-synapse structure, it seems that this kind of hierarchical encoding is the only practical way to specify a natural or synthetic neural system; we just have to iron out the details of mapping neural structures to artificial genes. In the provisional patent mentioned above, I call this collection of methods (and others) NeuroCAD.
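
A toy sketch of this evolutionary loop follows. Everything in it (the gene pool, the fitness function, the parameters) is a placeholder for illustration; in practice the fitness would come from assembling the network from its genes, training it with the bidirectional technique above, and scoring it on a task:

```python
import random

# Each 'gene' names a reusable subnet design plus one wiring parameter.
GENE_POOL = ["edge_detector", "oscillator", "integrator", "gate"]

def random_genome(length=6):
    return [(random.choice(GENE_POOL), random.uniform(0.0, 1.0))
            for _ in range(length)]

def fitness(genome):
    # placeholder: reward subnet diversity and mid-range wiring params
    kinds = len({name for name, _ in genome})
    return kinds - sum(abs(p - 0.5) for _, p in genome)

def evolve(pop_size=20, generations=30):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]      # keep the best designs
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]               # crossover: shuffle genes
            if random.random() < 0.2:               # mutation: swap one subnet
                i = random.randrange(len(child))
                child[i] = (random.choice(GENE_POOL), random.uniform(0, 1))
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

print(evolve())
```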

One other drawback to implementing and scaling spiking neural networks is that, while they are capable of sparse computation (neurons and synapses only compute when a signal passes through them) and so could be very low-power, most hardware we can run them on today, like CPUs and GPUs, computes constantly, updating the model for every neuron and synapse on every time-slice (though there may be workarounds). Many companies, large and small, are working on neuromorphic computing hardware that more closely matches the behavior of spiking neurons and networks and can compute sparsely, but at present it is difficult to provide enough flexibility in the neural model, and to interconnect, scale, and train these networks, especially at sizes large enough and organized properly to do useful work. We can probably emulate these networks on GPU clusters and build them up to a certain size and complexity, but a human-brain equivalent would require over 100 billion neurons and 100 trillion synapses. Emulating that requires a massive supercomputer, and for deployment it is far beyond the density of any 2D chip fabrication technology we have today. We will need new fabrication technologies that can lay out 3D lattices of synthetic neurons, axons, dendrites, and synapses to get there, and that will take significant investment in new fabrication technology.
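
To show what sparse, event-driven computation buys, here is a minimal sketch of an event-queue spiking simulator (the network and parameters are invented for the example): work happens only where and when a spike actually arrives.

```python
import heapq

def event_driven_sim(connections, initial_spikes, threshold=1.0, horizon=50.0):
    """Event-driven simulation: process a queue of spike arrivals
    instead of updating every neuron and synapse on every time-slice,
    as a dense CPU/GPU loop would."""
    # connections[src] = list of (dst, weight, axonal_delay)
    potential = {}
    events = [(t, n, 1.0) for t, n in initial_spikes]  # (time, neuron, charge)
    heapq.heapify(events)
    fired = []
    while events:
        t, n, charge = heapq.heappop(events)
        if t > horizon:
            break
        potential[n] = potential.get(n, 0.0) + charge
        if potential[n] >= threshold:       # the neuron fires...
            potential[n] = 0.0
            fired.append((t, n))
            for dst, w, delay in connections.get(n, ()):
                heapq.heappush(events, (t + delay, dst, w))
    return fired                            # ...and only then does work spread

net = {"A": [("B", 0.6, 1.0), ("C", 1.2, 2.0)],
       "B": [("C", 0.6, 1.0)]}
print(event_driven_sim(net, [(0.0, "A")]))  # [(0.0, 'A'), (2.0, 'C')]
```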

I think that if we can start solving these issues and move toward more functional neuromorphic architectures that more fully represent how the brain, nervous system, and real neurons work and learn, we can start to consolidate some of the one-off, special-purpose deep learning methods used today into these more powerful and flexible architectures that handle multiple modes of functionality with more elegant designs. These models will also open up novel forms of neural computation, which we will be able to apply to tasks like computer vision, robot motor control, hearing, speech, and even cognition in a way that is much more like the human brain.

But will more sophisticated neural networks like this actually work in the end? Go look in the mirror and wave at yourself - yes, it CAN work; you are the proof. Can we replicate it in an artificial system as capable as you? Now that is the trillion-dollar question, isn't it?