ORBAI is developing the next generation of artificial intelligence (AI) technology. Imagine speech interfaces that converse with you fluently, improve just by talking to you, and pick up your vocabulary and sayings; artificial vision systems that learn to see and identify objects in the world as they encounter them; and robots, drones, cars, toys, consumer electronics, and homes that use all of this together to truly perceive the world around them, precisely plan and control their actions and movement, learn as they explore and interact, and become genuinely intelligent and far more useful.
Artificial intelligence built on today's 2nd-generation deep learning has made some grand promises over the past decade, but it still falls short in many areas. Speech interfaces remain crude and cannot converse fluidly; we do not have intelligent homes, home robots that can clean, cook, and do laundry, Level 5 fully autonomous self-driving cars, true customer-service AI, or robotic or holographic humans. Today's 2nd-generation deep learning has fundamental limitations and will never achieve real AI like this.
ORBAI's Trainable Spiking Neural Networks are the 3rd generation of AI: more powerful, more flexible, more general-purpose, and able to learn dynamically in real-life situations. We develop NeuroCAD, the tool-suite used to shape, design, train, and evolve these networks, along with the technology to train and deploy them.
NeuroCAD is a software tool with a UI for designing advanced neural networks. It lets the user lay out layers of neurons, connect them algorithmically, and crossbreed and mutate networks to generate a population of similar neural nets; it then runs simulations on them, trains them, culls the underperformers, crossbreeds the top-performing designs, and continues the genetic algorithm until a design emerges that meets the performance criteria set by the designer.
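The evolutionary loop described above (evaluate, cull, crossbreed the survivors, mutate, repeat) can be sketched in a few lines. This is a generic genetic-algorithm skeleton, not NeuroCAD's actual API; the genome encoding (a flat list of floats standing in for network parameters), the fitness function, and all parameter names are illustrative.

```python
import random

def evolve(population, fitness, n_generations=50, survival_rate=0.5,
           mutation_rate=0.1):
    """Generic evolutionary loop: evaluate, cull, crossbreed, mutate.

    `population` is a list of genomes (lists of floats); `fitness`
    scores a genome, higher being better.
    """
    for _ in range(n_generations):
        # Evaluate every design and cull the underperformers.
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:max(2, int(len(ranked) * survival_rate))]
        # Crossbreed top performers until the population is refilled.
        children = []
        while len(survivors) + len(children) < len(population):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))       # single-point crossover
            child = a[:cut] + b[cut:]
            # Mutate each gene with small probability.
            child = [g + random.gauss(0, 0.1)
                     if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Toy objective: genomes closest to the all-ones vector score highest.
pop = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(20)]
best = evolve(pop, fitness=lambda g: -sum((x - 1.0) ** 2 for x in g))
```

Because the survivors are carried over unchanged each generation, the best design found so far is never lost, and the loop only terminates when the generation budget runs out; a real tool would stop as soon as the designer's performance criteria are met.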
It works with today's feed-forward deep neural networks to design and evolve advanced AI applications, but it is really designed for Spiking Neural Networks, which more realistically simulate how real neurons, neural networks, sensory systems, and intelligence work: spiking signals travel between neurons in time and are processed and integrated in sophisticated ways at neurons and synapses. Spiking networks have distinct advantages over deep learning's simpler feed-forward networks, but they are more challenging to design, train, test, and work with, and they require more sophisticated and powerful design tools with the ability to evolve networks.
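As a concrete illustration of spiking signals that travel and integrate in time, here is the textbook leaky integrate-and-fire (LIF) neuron, the simplest spiking model. It is a sketch for intuition only: the parameters are arbitrary, and the model is a drastic simplification, not ORBAI's neuron model.

```python
def lif_neuron(input_current, dt=1.0, tau=10.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (discrete-time Euler step).

    The membrane voltage leaks toward its resting value while
    integrating input current over time; when it crosses threshold,
    the neuron emits a spike and resets.
    Returns the list of time steps at which spikes occurred.
    """
    v = v_rest
    spikes = []
    for t, current in enumerate(input_current):
        # Leaky integration: dV/dt = (v_rest - V) / tau + I
        v += dt * ((v_rest - v) / tau + current)
        if v >= v_thresh:
            spikes.append(t)   # record the spike time
            v = v_reset        # reset after firing
    return spikes

# A steady supra-threshold drive produces a regular spike train;
# the information is carried in *when* the spikes occur, not in a
# single static activation value.
spike_times = lif_neuron([0.15] * 100)
```

Even this minimal model shows the key difference from a deep learning node: its output is a sequence of events in time, so the same neuron can represent temporal structure that a static activation cannot.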
This tool is necessary because there is no way to lay out and connect spiking neural networks by hand or with mathematical or algorithmic methods, and no way to predict whether they will work when building them. Designing such a network by hand, connecting millions of neurons each with thousands of connections, would be like throwing a billion strands of spaghetti at a wall to see which stick. It is humanly impossible.
ORBAI's AI technology solves many of the fundamental problems with traditional deep learning networks, such as narrow functionality, the need for large, labeled, formatted data sets for training, and the inability to train in the real world. The Bidirectional Interleaved Complementary Hierarchical Neural Networks (BICHNNs) used in ORBAI's AI have much more advanced spiking neuron models that simulate how time-domain signals traverse real biological neurons and synapses, and how they are processed and integrated by them, making these neural nets far more powerful, flexible, and adaptable than traditional static deep learning 'neural' nodes.
By placing two of these networks together, with signals moving in opposite directions but interacting and providing feedback to each other, the BICHNN architecture allows these networks to self-train, much as the human sensory and motor cortices do. We shape these more advanced spiking neural networks to their function by designing and evolving them in our NeuroCAD tool-suite, giving them the ability to process dynamic inputs and outputs, compute on them in both time and space, and perform advanced vision, speech, control, and decision-making.
Deep neural nets give only static evaluation, CNNs spatial processing, and RNNs sequential processing; each is a one-off 'hack' that uses a drastically oversimplified 'neural' network for a specific purpose. These new spiking neural networks wrap all of that into one architecture and are much more flexible and powerful.
The simplest way to compare our Bidirectional Interleaved Complementary Hierarchical Neural Nets to current deep learning is that we can construct an autoencoder, but one that encapsulates the functionality of a CNN for spatial data, an RNN for temporal data, and a GAN's ability to train a discriminator (encoder) and a generator (decoder) against each other. You can feed it live data, and it will learn to encode it and to classify or group it, like an autoencoder. If you place two of these back to back, you can train two data sets to associate with the same encoding, such as video of objects into one input and their spoken names into the other. Again, they learn associatively, with no need for explicit labeling by hand. They can learn live, from the real world.
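To make the bi-modal association idea concrete, here is a toy sketch in plain NumPy: two linear "encoders" (one per modality) are trained so that temporally concurrent (vision, audio) samples map to nearby points in a shared code, while samples that never co-occur are pushed apart up to a margin. Everything here is a stand-in for illustration: the data is synthetic, the encoders are linear rather than spiking networks, and the margin-based update is a classic contrastive rule, not ORBAI's training method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 3 object classes, each observed as a noisy
# "visual" vector and a noisy "audio" (spoken-name) vector.
n_classes, d_vis, d_aud, k = 3, 8, 6, 4
vis_proto = rng.normal(size=(n_classes, d_vis))
aud_proto = rng.normal(size=(n_classes, d_aud))

W_vis = 0.1 * rng.normal(size=(k, d_vis))  # visual encoder (linear)
W_aud = 0.1 * rng.normal(size=(k, d_aud))  # audio encoder (linear)

lr, margin = 0.05, 1.0
for step in range(3000):
    c = rng.integers(n_classes)                     # concurrent pair: class c
    x = vis_proto[c] + 0.1 * rng.normal(size=d_vis)
    y = aud_proto[c] + 0.1 * rng.normal(size=d_aud)
    nc = (c + 1 + rng.integers(n_classes - 1)) % n_classes  # different class
    neg = aud_proto[nc] + 0.1 * rng.normal(size=d_aud)
    zx, zy, zn = W_vis @ x, W_aud @ y, W_aud @ neg
    # Pull temporally concurrent embeddings together...
    W_vis -= lr * np.outer(zx - zy, x)
    W_aud -= lr * np.outer(zy - zx, y)
    # ...and push non-concurrent embeddings apart, up to a margin.
    if np.sum((zx - zn) ** 2) < margin:
        W_vis += lr * np.outer(zx - zn, x)
        W_aud -= lr * np.outer(zx - zn, neg)

# Each class's visual embedding should now sit nearest its own audio
# embedding, with no labels used -- only which samples co-occurred in time.
z_vis = vis_proto @ W_vis.T
z_aud = aud_proto @ W_aud.T
```

The point of the sketch is the supervision signal: nothing is ever labeled by hand; the only training signal is which samples arrived at the same time.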
These networks can be used in vision systems, sensory systems, speech pipelines (STT, NLP, TTS), control systems for robots, drones, and cars, and many other functions. NeuroCAD allows the user to design, train, and evolve spiking neural networks to perform any arbitrary task.
So how does this compare to TensorFlow, Torch, Caffe, …? Deep learning tool-kits like TensorFlow and Torch provide tools for hand-scripting specialized feed-forward or recurrent deep neural nets: essentially a collection of singular hacks for building narrow AI systems trained for specific tasks by pushing volumes of data through them in repeated, compute-heavy backpropagation cycles. ORBAI's NeuroCAD lets AI designers start with one universal neural network architecture and use automated processes to customize, refine, and evolve it for specific purposes. The networks train to auto-encode uni-modal input data, and to associatively encode bi-modal data of any kind just by its temporal concurrence, with no pre-formatting or hand labeling necessary. They can also continue to train once the sensory, control, and planning auto-encoders are assembled, and can even learn from real-world interaction and experience, which is currently impossible with deep learning neural nets.
By using these more powerful and realistic neuron models, architecting more brain-like neural networks with them, and evolving them toward the same design that our human sensory cortices use to sense, interpret, abstract, and learn about the environment around us, ORBAI can build artificial vision, sensory, speech, planning, and control AI that goes far beyond what today's deep learning can do. This makes the AI in robots, drones, cars, smart appliances, and smart homes orders of magnitude more intelligent, perceptive, interactive, and capable, and, most of all, able to learn while interacting with people and their environment in the same way we humans do: from observation, interaction, experience, and practice. That enables existing AI products to learn and function much better, and enables products we do not have today, like useful home robots that can do chores and truly autonomous Level 5 self-driving cars.