ORBAI is developing the next generation of artificial intelligence (AI) technology. In the next few years we will commercialize speech interfaces that converse with you fluently and get better just by talking to you, and artificial vision systems that learn to see and identify objects in the world as they encounter them. We will deploy these systems in our Human AI for our AI employees, and license them for use in others' robots, drones, cars, toys, consumer electronics, and homes. This Human AI will use all of this functionality together to truly perceive the world around it, precisely plan and control its actions and movement, and learn as it explores and interacts.
To do this, we start with spiking neural networks, train and evolve them (using our patented NeuroCAD process) into artificial vision, hearing, speech, cognition, and motor control cortices, and combine those cortices into artificial brains, our Human AI. We train these Human AIs with performance capture from a specific person, to make an AI mimic of that person with conversational and visual communication capability via a computer graphics character. When connected to vocation-specific software and databases, these become AI employees, working among us on screens large and small in many different vocations. Each of these super-human vocational AIs is a narrow, deep slice; with enough of them, the slices can be assembled into a large pie. That pie is the precursor for an Artificial General Intelligence, able to interact intuitively with millions of people, do many of the non-physical jobs that humans can (only better), and evolve and scale to improve with time.
ORBAI's AI technology solves many of the fundamental problems with traditional deep learning networks: narrow functionality, the need for large, labeled, and formatted datasets for training, and the inability to train in the real world. Our Bidirectional Interleaved Complementary Hierarchical Neural Networks (BICHNN) use much more advanced spiking neuron models that simulate how time-domain signals traverse real biological neurons and synapses, and how those signals are processed and integrated by them, making these neural nets far more powerful, flexible, and adaptable than traditional static, deep learning 'neural' nodes.
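To make the contrast with static deep learning nodes concrete, here is a minimal sketch of the classic leaky integrate-and-fire model, one of the simplest spiking neuron models: the membrane potential integrates a time-varying input current, leaks toward rest, and emits a spike when it crosses a threshold. This is a textbook illustration of time-domain spiking dynamics, not ORBAI's actual (unpublished) neuron model; all parameter values are arbitrary.

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
                 tau=20.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron over time.

    The membrane potential leaks toward v_rest, integrates the
    injected current, and emits a spike (then resets) whenever it
    crosses v_thresh. Output is a binary spike train.
    """
    v = v_rest
    spikes = []
    for i_t in input_current:
        # Euler step: leak toward rest plus injected current
        v += dt / tau * (v_rest - v) + dt * i_t
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# Constant drive: the neuron integrates, fires, resets, and repeats,
# so the input level is encoded in the timing of the output spikes
current = [0.06] * 100
spike_train = simulate_lif(current)
print(sum(spike_train), "spikes in 100 time steps")
```

Unlike a deep-learning node, which maps one static input to one static output, this unit's output is a pattern of events in time, which is what lets networks of such units compute on dynamic signals.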
By placing two of these networks together, with signals moving in opposite directions but interacting and providing feedback to each other, our BICHNN architecture allows the networks to self-train, just as the human sensory and motor cortices do. We shape these spiking neural networks to their function by designing and evolving them in our NeuroCAD tool suite, giving them the ability to process dynamic inputs and outputs, compute on them in both time and space, and perform advanced vision, speech, control, and decision making.
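ORBAI has not published the BICHNN internals, but the self-training idea (two paths carrying signals in opposite directions, training each other through feedback, with no labels) has a loose conventional analogy in an autoencoder: a bottom-up encoding path and a top-down decoding path whose only training signal is the reconstruction error passed between them. The tiny linear sketch below is that analogy only, with arbitrary sizes and learning rate, not ORBAI's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))          # unlabeled toy data

# Bottom-up (encoding) and top-down (decoding) paths
W_enc = rng.normal(scale=0.1, size=(4, 2))
W_dec = rng.normal(scale=0.1, size=(2, 4))

def loss():
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial_loss = loss()
lr = 0.05
for _ in range(1000):
    h = X @ W_enc                      # signal moving "up"
    err = h @ W_dec - X                # reconstruction error fed back "down"
    W_dec -= lr * h.T @ err / len(X)   # gradient step on the decoder
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)  # ...and the encoder
final_loss = loss()
print(initial_loss, "->", final_loss)
```

The point of the analogy: neither path ever sees a label, yet the pair improves because each provides the other's training signal, which is the property the text attributes to the paired BICHNN networks.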
We developed this new set of NeuroCAD tools and processes because there is, at present, no way to lay out and connect spiking neural networks by hand or with mathematical or algorithmic methods, and no way to predict whether a given design will work before building it, so we use genetic algorithms to find the optimal feedback and autoencoder designs, specialized to the training dataset and modality. The whole process is made tractable for large spiking neural networks by representing each network as a compact genome that is crossbred and mutated, then expanded through a deterministic, smoothly interpolating process into the full connectome to be trained and evaluated. Previous methods crossbred and adjusted synaptic weights directly, which limited genetic algorithms to very small networks: the parameter space of all synaptic weights in a large network is too large to search.
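The genome-to-connectome idea can be sketched as follows. The expansion function below (smooth interpolation from a few control values up to a full weight matrix) and the toy fitness task are illustrative stand-ins, since NeuroCAD's actual expansion and evaluation are not published; the key property shown is that the genetic algorithm searches a 16-dimensional genome space rather than the 64x64 weight space.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64     # neurons in the expanded network (N*N synapses)
G = 16     # genes per genome -- far fewer than the N*N weights

def expand(genome):
    """Deterministically expand a compact genome into a full N x N
    connectome by smooth interpolation (hypothetical scheme)."""
    half = len(genome) // 2
    xs = np.linspace(0, 1, N)
    gx = np.linspace(0, 1, half)
    row = np.interp(xs, gx, genome[:half])   # smooth row profile
    col = np.interp(xs, gx, genome[half:])   # smooth column profile
    return np.outer(row, col)

# Toy fitness: how closely the expanded network's response to a fixed
# input matches a fixed target pattern (purely illustrative)
x_in = rng.normal(size=N)
target = np.sin(np.linspace(0, 3, N))

def fitness(genome):
    return -np.mean((np.tanh(expand(genome) @ x_in) - target) ** 2)

pop = rng.normal(size=(20, G))               # population of genomes
best_initial = max(fitness(g) for g in pop)
for _ in range(30):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]  # keep the fittest half
    kids = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(G) < 0.5           # uniform crossover
        kids.append(np.where(mask, a, b) + rng.normal(scale=0.05, size=G))
    pop = np.vstack([parents, kids])         # elites survive unchanged
best_final = max(fitness(g) for g in pop)
print(best_initial, "->", best_final)
```

Because crossover and mutation act on the 16 genes while evaluation runs on the 4,096 expanded weights, the search stays tractable no matter how large the expanded network grows, which is the advantage claimed over crossbreeding synaptic weights directly.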
Using these Bidirectional Interleaved Complementary Hierarchical Neural Networks, constructed by expanding compact genomes into full connectomes, we can efficiently run genetic algorithms to specialize them into optimal visual, speech, sensory, and even motor-control cortices. Another novel behavior exhibited by these loops is that, when properly set up and trained, they hold internal state and continue to operate even when all inputs are turned off, meaning they have memory and logic. This capability can be evolved to perform cognition and planning, giving us a frontal cortex capable of complex decision making. In this manner, we can evolve most of the components we need to make an actual functional brain.
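The claim that a recurrent loop can hold state after its inputs stop has a very simple caricature: a ring of threshold neurons in which a single injected spike keeps circulating with no further external input. This toy is only meant to make "memory in a loop" concrete; it is nothing like an evolved cortex.

```python
# Ring of threshold neurons: each neuron fires iff its ring
# predecessor fired on the previous time step.
n = 5
state = [0] * n
state[0] = 1                 # one external input spike, then silence

history = []
for _ in range(20):
    state = [state[(i - 1) % n] for i in range(n)]
    history.append(state)

# With no further input, exactly one spike remains in flight at every
# step: the loop itself is storing a bit of state.
print(all(sum(s) == 1 for s in history))
```

The stored information here is trivial (which neuron is active), but it persists indefinitely without input, which is the property the text describes as memory arising from the loop's own dynamics.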
By using these more powerful and realistic neuron models, architecting more brain-like neural networks with them, and evolving those networks toward the same design that our human sensory cortices use to sense, interpret, abstract, and learn about the environment around us, ORBAI is able to build artificial vision, sensory, speech, planning, and control AI that goes far beyond what today's Deep Learning can do, and to make the AI in robots, drones, cars, smart appliances, and smart homes orders of magnitude more intelligent, perceptive, interactive, and capable, and most of all, able to learn while interacting with people and its environment, the same way we humans do: from observation, interaction, experience, and practice. This lets existing AI products learn and function much better, and enables products we do not have today, like useful home robots that can do chores and truly autonomous Level 5 self-driving cars. We can construct AI that mimics humans and does a variety of real jobs, and we can later bring all of its collective knowledge and skills together into an AGI in the next 6-8 years.
Brent Oster, CEO ORBAI