Neural Networks

ORBAI Bidirectional Interleaved Complementary Hierarchical Neural Nets – Technical Supplement

ORBAI is developing our patent-pending Bidirectional Interleaved Complementary Hierarchical Neural Networks (BICHNN), an advanced neural network architecture that solves many of the fundamental problems with traditional deep learning networks. BICHNNs use much more advanced spiking neuron models that simulate how time-domain signals enter, exit, and are processed by real biological neurons and synapses, making them far more powerful, flexible, and adaptable than traditional static Deep Learning ‘neural’ nodes. Because this signal processing takes place entirely in the time domain, it is far more information-rich and dynamic, and well suited to sensory processing, speech, and motor control, yielding a network that is a close match in form and function for these applications.
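
To make the contrast with static Deep Learning nodes concrete, the sketch below simulates a standard leaky integrate-and-fire (LIF) neuron, a generic textbook spiking model driven by a time-domain input spike train. It is purely illustrative: the model, parameter names, and values are common defaults, not ORBAI's patent-pending neuron and synapse models.

```python
import numpy as np

def simulate_lif(input_spikes, dt=1e-3, tau_m=20e-3,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0, w_syn=0.25):
    """Leaky integrate-and-fire neuron driven by a binary input spike train.

    Each element of `input_spikes` is 0 or 1 for one time step of length dt.
    Returns the membrane potential trace and the output spike train.
    All parameters are generic example values, not ORBAI's.
    """
    v = v_rest
    v_trace = np.zeros(len(input_spikes))
    out_spikes = np.zeros(len(input_spikes), dtype=int)
    for t, s in enumerate(input_spikes):
        v += dt / tau_m * (v_rest - v)   # leak toward the resting potential
        v += w_syn * s                   # instantaneous synaptic input
        if v >= v_thresh:                # threshold crossing -> emit a spike
            out_spikes[t] = 1
            v = v_reset                  # reset the membrane after firing
        v_trace[t] = v
    return v_trace, out_spikes

# Example: a random ~200 Hz input spike train over 100 ms of simulated time.
rng = np.random.default_rng(0)
inputs = (rng.random(100) < 0.2).astype(int)
trace, outputs = simulate_lif(inputs)
print(f"{inputs.sum()} input spikes -> {outputs.sum()} output spikes")
```

Unlike a static node that maps one input vector to one output value, this neuron's output depends on the timing and history of its inputs, which is the time-domain property described above.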

Our BICHNN architecture places two of these networks together, with signals moving through them in opposite directions while interacting and providing feedback to each other, which allows the pair to self-train, just as the human visual cortex does.
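
One minimal way to picture this pairing, assuming simple stacks of dense layers (a sketch only; the actual BICHNN layer types, interleaving, and connectivity are patent-pending and not described here), is two complementary stacks over the same hierarchy of levels, one running bottom-up and one running top-down:

```python
import numpy as np

class BidirectionalPair:
    """Sketch of two complementary networks with signals flowing in opposite
    directions over the same hierarchy of levels. Layer sizes, dense weights,
    and the tanh nonlinearity are illustrative assumptions, not ORBAI's design.
    """
    def __init__(self, sizes, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        # One weight matrix per level in each direction.
        self.up = [rng.normal(0, 0.1, (sizes[i + 1], sizes[i]))
                   for i in range(len(sizes) - 1)]
        self.down = [rng.normal(0, 0.1, (sizes[i], sizes[i + 1]))
                     for i in range(len(sizes) - 1)]

    def ascend(self, x):
        """Sensory input -> increasingly abstract representations, level by level."""
        acts = [x]
        for W in self.up:
            acts.append(np.tanh(W @ acts[-1]))
        return acts

    def descend(self, top):
        """Abstract code -> expanded predictions of lower-level activity."""
        preds = [top]
        for W in reversed(self.down):
            preds.append(np.tanh(W @ preds[-1]))
        return preds[::-1]   # preds[0] predicts the original input
```

This captures only the opposite-direction pairing and the per-level points where feedback can occur; it leaves out the spiking dynamics and interleaving described above.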

To allow us to see and perceive, our brain takes in images from the retina and sends them up through the visual cortex, processing them into more abstract representations or ideas. It also carries a complementary set of signals moving in the opposite direction, expanding ideas and abstractions back into images that represent what we anticipate or expect to see. Try closing your eyes and picturing a FIRE TRUCK. There, you can see it; that was your visual cortex working in reverse, projecting an image of what you were thinking of. The interaction between these two signal streams is what focuses our attention and trains our visual system to see and recognize the world around us from everyday experience. Deep Learning CNNs for vision CANNOT do this at present; they rely on a crude approximation, using backpropagation and gradient descent to statically pre-train the network on a set of labelled images.
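
Continuing the sketch above, one hedged illustration of how the interaction between the two streams could drive label-free learning is a local, reconstruction-style rule: at each level, compare the top-down prediction with the bottom-up activity and nudge the downward weights to shrink the mismatch. This generic delta-rule update is an assumption made for illustration only; it is neither ORBAI's patent-pending mechanism nor backpropagation over a labelled dataset.

```python
def self_train_step(net, x, lr=0.01):
    """One illustrative self-training step on a BidirectionalPair.

    Uses only signals available locally at each level (bottom-up activity and
    top-down prediction), with no labels. A stand-in rule, not ORBAI's method.
    """
    acts = net.ascend(x)            # bottom-up activities, levels 0..L
    preds = net.descend(acts[-1])   # top-down predictions, levels 0..L
    total_mismatch = 0.0
    for i in range(len(net.down)):
        err = acts[i] - preds[i]                    # mismatch at level i
        grad = err * (1.0 - preds[i] ** 2)          # pass back through tanh
        net.down[i] += lr * np.outer(grad, preds[i + 1])
        total_mismatch += float(err @ err)
    # A complementary update to net.up (omitted here) would let the upward
    # stream adapt as well; this sketch trains only the downward weights.
    return total_mismatch

# Example: with repeated exposure, the prediction mismatch shrinks.
rng = np.random.default_rng(1)
net = BidirectionalPair([64, 32, 8], rng=rng)
x = rng.random(64)
for _ in range(200):
    mismatch = self_train_step(net, x)
print(f"remaining mismatch after 200 steps: {mismatch:.4f}")
```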

By using these more powerful and realistic neuron models, architecting more brain-like neural networks with them, and exploiting the same design that all our human sensory cortices use to sense, interpret, abstract, and learn about the environment around us, ORBAI is able to build artificial vision, sensory, speech, planning, and control AI that goes far beyond what today's Deep Learning can do. It can make the AI in robots, drones, cars, smart appliances, and smart homes orders of magnitude more intelligent, perceptive, interactive, and capable, and most of all, able to learn while interacting with people and their environment, in the same way that we humans do: from observation, interaction, experience, and practice.

This will truly propel Artificial Intelligence to become what we have all envisioned it should be. It will give us robots that are truly our companions and help us in our daily work: interacting with us like a person, conversing with us fluidly, understanding what we want them to do, asking questions, seeing and sensing their environment, moving around safely in it, interacting with all the objects in it, and being truly helpful and able to do real jobs. Working robots will be able to take on much more advanced and varied tasks, easily resolving ambiguity and adapting their actions to the work environment and situation as it changes, performing jobs that today's pre-programmed industrial robots cannot.

Even ordinary appliances will become smart and interactive, and your home will converse with you, have a personality, know you, your habits, and what you want, and be able to assist you by operating everything in it. Self-driving cars will reach Level 5 autonomy by incorporating our dynamic vision, planning, navigation, speech, and control systems and learning from everyday driving, not only teaching the fleet to drive safely from its combined, constantly accumulating experience and knowledge, but also coming to know you and your driving habits, and how to drive you more safely and efficiently where you want to go.

With time, these advanced neural architectures will be able to power human-like 3D characters and androids that look like us, move like us, and speak and emote like us, with realistic facial expressions, lip-sync, fluid and flowing conversational speech, and even emotional perception and empathy. Someday they will have human-level cognition, understanding, and intelligence, and someday, further out, they will even go beyond us. This is the goal at which we have aimed our AI technologies, because emulating a human brain and making it control a human body and face well is the single hardest thing to do in AI. If we aim our technologies at this goal and keep moving toward it, revolutionizing the other applications will naturally fall out of these efforts.