ORBAI Gen3 AI

ORBAI develops revolutionary Artificial Intelligence (AI) technology based on our proprietary, patent-pending time-domain neural network architecture for computer vision, sensory processing, speech, navigation, planning, and motor control. We license these technologies to put the ‘smarts’ in robots, drones, cars, appliances, toys, homes, and many other consumer applications.

Imagine speech interfaces that converse with you fluently, improve just by talking to you, and pick up your vocabulary and sayings; artificial vision systems that learn to see and identify objects in the world as they encounter them; and robots, drones, cars, toys, consumer electronics, and homes that use all of this together to truly perceive the world around them, precisely plan and control their actions and movement, learn as they explore and interact, and become truly intelligent and far more useful.

ORBAI's Bidirectional Interleaved Complementary Hierarchical Neural Networks (BICHNN) are a novel, patented AI architecture that uses two interleaved spiking neural networks which train each other to perform tasks, in a manner similar to how the human sensory cortices operate and learn. As a result, they train simply by sensing and interacting with the real world, learning from experience, context, and practice, without requiring canned training data or explicit training sessions.

ORBAI's Humanoid AI is our flagship, using our neural net technology to power our artificial people, giving them vision, hearing, speech, and lifelike animation so they can interact with you and work real jobs, like Claire the Concierge, James the Bartender, and Ada the Greeter. By using near-photorealistic computer graphics for our 3D animated characters, we can project them life-size at 4K onto holographic screens that make them look like they are standing right in front of you, suspended in mid-air. The same technology can also extend directly to humanoid robots that will interact just as realistically once the robotic hardware matures.

link: ORBAI Demo Video

We prototype and test this technology in our humanoid hologram and android creations because driving the sensory systems, AI, and motion control of humanoids takes far more advanced, powerful, and flexible neural network architectures than today's DL technologies can deliver. The same architectures are also ideal for motor control and sensory applications like vision, hearing, and touch in these and other robotic, drone, automotive, smart appliance, and smart home applications.

We will license these core technologies and sell annual, per-seat licenses for NeuroCAD, a software design toolkit that lets AI engineers construct, test, and scale these novel neural networks for functions like vision, speech, control, and decision making, and then integrate them into products like those above. We will license both the network technology itself and pre-built network modules for integration and deployment on a royalty basis, so our revenue scales with our customers' revenue. We will also build our own premier products with these tools and technologies to showcase the high-end features and generate additional revenue.

ORBAI's AI technology solves many of the fundamental problems with traditional deep learning networks. Our Bidirectional Interleaved Complementary Hierarchical Neural Networks use much more advanced spiking neuron models that simulate how time-domain signals enter, are processed by, and exit real biological neurons and synapses, making them far more powerful, flexible, and adaptable than traditional static Deep Learning ‘neural’ nodes. This signal processing is much more information-rich and dynamic, and because it all takes place in the time domain, it is well suited to sensory processing, speech, and motor control, providing a network that is a close match in form and function for these applications.
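To make this time-domain idea concrete in conventional terms, here is a minimal leaky integrate-and-fire (LIF) sketch in Python. It is a textbook neuron model, not ORBAI's proprietary one: incoming spikes are weighted and integrated into a membrane potential that leaks toward rest over time and emits an output spike whenever it crosses a threshold.

    import numpy as np

    # Minimal leaky integrate-and-fire (LIF) neuron: a textbook illustration of
    # time-domain spiking computation, not ORBAI's proprietary neuron model.
    def simulate_lif(input_spikes, weights, dt=1e-3, tau=20e-3,
                     v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        """input_spikes: (timesteps, n_inputs) array of 0/1 spikes.
        weights: (n_inputs,) synaptic weights.
        Returns the output spike train and the membrane potential trace."""
        n_steps = input_spikes.shape[0]
        v = v_rest
        v_trace = np.zeros(n_steps)
        out_spikes = np.zeros(n_steps)
        for t in range(n_steps):
            v += -(v - v_rest) * (dt / tau)        # leak toward resting potential
            v += float(weights @ input_spikes[t])  # integrate weighted input spikes
            if v >= v_thresh:                      # threshold crossing -> output spike
                out_spikes[t] = 1.0
                v = v_reset
            v_trace[t] = v
        return out_spikes, v_trace

    # Example: three random input spike trains driving one neuron for 200 ms.
    rng = np.random.default_rng(0)
    inputs = (rng.random((200, 3)) < 0.1).astype(float)
    spikes, _ = simulate_lif(inputs, weights=np.array([0.6, 0.4, 0.8]))
    print("output spikes emitted:", int(spikes.sum()))

The timing of the output spikes, not just their count, carries information, which is what makes models like this a natural fit for signals such as audio, motion, and video.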

Feedback Between Two Complementary Networks

By placing two of these networks together, each with signals moving in opposite directions, but interacting and providing feedback to each other, our BICHNN architecture allows these networks to self-train, just like the human visual cortex does.

To allow us to see and perceive, the brain takes in images from the retina and sends them up through the visual cortex, processing them into more abstract representations or ideas, but it also has a complementary set of signals moving in the opposite direction, expanding ideas and abstractions into images that represent what we anticipate or expect to see. Try closing your eyes and picturing a FIRE TRUCK. There, you can see it: that was your visual cortex working in reverse, projecting an image of what you were thinking of. The interaction between these two streams is what focuses our attention and trains our visual system to see and recognize the world around us from everyday experience. Deep Learning CNNs for vision CANNOT do this at present; they rely on a crude approximation, using backpropagation and gradient descent to statically pre-train the network on a set of labelled images.
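As a loose analogy in conventional, non-spiking terms (this is not ORBAI's patented mechanism), the sketch below pairs a bottom-up 'recognition' map with a top-down 'generative' map and lets the mismatch between what is sensed and what is predicted drive learning in both directions, with no labels involved. The sizes, learning rate, and synthetic data are arbitrary choices for illustration.

    import numpy as np

    # Loose analogy in conventional (non-spiking) terms, not ORBAI's patented method:
    # a bottom-up "recognition" map and a top-down "generative" map update each other
    # from an unlabeled input stream by shrinking the mismatch between what is sensed
    # and what the top-down path predicts.
    rng = np.random.default_rng(1)
    d, k = 16, 4                          # input size and abstract (latent) size
    W_up = rng.normal(0, 0.1, (k, d))     # bottom-up: input -> abstraction
    W_down = rng.normal(0, 0.1, (d, k))   # top-down: abstraction -> predicted input
    lr = 0.01

    stream = rng.normal(size=(2000, d))   # stand-in for a stream of sensory input

    for x in stream:
        z = W_up @ x                      # abstract code for what was sensed
        x_pred = W_down @ z               # what the top-down path expects to see
        err = x - x_pred                  # the mismatch drives learning in both maps
        W_down += lr * np.outer(err, z)
        W_up += lr * np.outer(W_down.T @ err, x)

    x = stream[-1]
    print("reconstruction error:", float(np.mean((x - W_down @ (W_up @ x)) ** 2)))

In ORBAI's framing, the analogous feedback happens continuously between two spiking networks running in opposite directions, rather than between a pair of static weight matrices.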

By using these more powerful and realistic neuron models, architecting brain-like neural networks with them, and exploiting the same design that all our human sensory cortices use to sense, interpret, abstract, and learn about the environment around us, ORBAI can build artificial vision, sensory, speech, planning, and control AI that goes far beyond what today's Deep Learning can do. It can make the AI in robots, drones, cars, smart appliances, and smart homes orders of magnitude more intelligent, perceptive, interactive, and capable, and most of all able to learn while interacting with people and their environment, the same way we humans do: from observation, interaction, experience, and practice.

This will truly propel Artificial Intelligence to become what we have all envisioned it should be. It will give us robots that are truly our companions and help us in our daily work: interacting with us like people, conversing fluidly, understanding what we want them to do, asking questions, seeing and sensing their environment, moving around safely in it, interacting with the objects in it, and being genuinely helpful and able to do real jobs. Working robots will be able to handle much more advanced and varied tasks, easily resolving ambiguity and adapting their actions as the work environment and situation change, performing jobs that today's pre-programmed industrial robots cannot.

Even ordinary appliances will become smart and interactive, and your home will converse with you, have a personality, know you, your habits, and what you want, and be able to assist you by operating everything in it. Self-driving cars will reach Level 5 autonomy by incorporating our dynamic vision, planning, navigation, speech, and control systems, learning from everyday driving: not only teaching the fleet to drive safely from its combined, constantly accumulating experience and knowledge, but also getting to know you and your driving habits, and helping drive you more safely and efficiently to where you want to go.

With time, these advanced neural architectures will be able to power very realistic human-like 3D characters and androids that look like us, move like us, and speak and emote like us, with realistic facial expressions, lip-sync, fluid conversational speech, and even emotional perception and empathy. Someday they will have human-level cognition, understanding, and intelligence, and someday, further out, they will even go beyond us toward AGI. This is the goal at which we have aimed our AI technologies, because emulating a human brain and making it control a human body and face well is the single hardest thing to do in AI; if we aim our technologies at this goal and keep moving toward it, revolutionizing the other applications will naturally fall out of those efforts.

FAQ on ORBAI Spiking Neural Networks:

Our technology uses more advanced spiking neural networks that can process dynamic inputs and outputs and do computation on them in both time and space. Where plain deep neural nets give only static evaluation, CNNs spatial processing, and RNNs sequential processing, each of these is a one-off ‘hack’ that uses a drastically oversimplified neural network for a specific purpose. These new spiking neural networks wrap all of that together in one architecture and are much more flexible and powerful.

Here is a higher-level article about spiking (neuromorphic) neural nets and why the author thinks they are the next, or third, generation of AI:

https://towardsdatascience.com/spiking-neural-networks-the-next-generation-of-machine-learning-84e167f4eb2b

If you want a deep read on spiking NNs, this 2017 survey paper by some of the top researchers in the field cites over 2,000 papers and summarizes their findings: https://arxiv.org/pdf/1705.06963.pdf

Intel has spent a significant R&D budget on its Loihi neuromorphic chip in the past few years, as has IBM on its TrueNorth neuromorphic chip, and many papers have been written about them. Many other startups and larger companies have invested as well. BUT, in all of these articles, research, and work on spiking neural nets, everyone laments that, to date, no one has figured out a way to train spiking NNs to do useful tasks. That is the precise subject of the patent ORBAI filed with Cooley's Bill Galliani in June 2019: methods for training and evolving spiking neural networks to optimally perform tasks.
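To make 'training and evolving' concrete in generic terms, here is a minimal evolutionary loop that tunes the input weights of a tiny leaky integrate-and-fire neuron so its output spike count matches a target. This is only an illustrative sketch, not the method described in ORBAI's patent; the fitness function, mutation scheme, and population size are assumptions chosen for brevity.

    import numpy as np

    # Illustrative sketch only, not the method in ORBAI's patent: a generic
    # (mu + lambda) evolutionary loop that tunes the input weights of a tiny
    # leaky integrate-and-fire neuron so its output spike count hits a target.
    rng = np.random.default_rng(42)
    T, N_IN, TARGET_SPIKES = 300, 5, 20
    inputs = (rng.random((T, N_IN)) < 0.2).astype(float)   # fixed input spike trains

    def lif_spike_count(weights, v_thresh=1.0, leak=0.05):
        """Simulate one LIF neuron over T steps and return its output spike count."""
        v, count = 0.0, 0
        for t in range(T):
            v += -leak * v + float(weights @ inputs[t])
            if v >= v_thresh:
                count += 1
                v = 0.0
        return count

    def fitness(weights):
        return -abs(lif_spike_count(weights) - TARGET_SPIKES)

    pop = [rng.normal(0, 0.2, N_IN) for _ in range(20)]
    for _ in range(50):
        children = [p + rng.normal(0, 0.05, N_IN) for p in pop]       # mutate
        pop = sorted(pop + children, key=fitness, reverse=True)[:20]  # keep the best

    print("best spike count:", lif_spike_count(pop[0]), "target:", TARGET_SPIKES)

Gradient-free search like this sidesteps the non-differentiable spike threshold, which is one reason evolutionary methods appear so often in the spiking-network literature.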

The simplest way to compare our Bidirectional Interleaved Complementary Neural Nets to current deep learning is that we can construct an autoencoder, but one that encapsulates the functionality of a CNN for spatial data, an RNN for temporal data, and the ability of a GAN to train both a discriminator (encoder) and a generator (decoder) against each other. You can just feed it live data, and it will ‘learn’ to encode it and to classify or group it, like an autoencoder. If you place two of these back to back, you can train two data sets to associate to the same encoding, such as video of objects as one input and their spoken names as the other. Again, they learn associatively, with no need for explicit labelling by hand. They can learn live, from the real world.
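To illustrate the back-to-back association in plain, non-spiking terms (a sketch only, not ORBAI's implementation), the code below trains two small encoder/decoder pairs so that co-occurring 'visual' and 'spoken-name' feature vectors learn to predict each other through a shared-size code, using only paired observations and no labels. The feature sizes, object prototypes, and learning rate are made up for the example.

    import numpy as np

    # Conceptual sketch in conventional (non-spiking) terms, not ORBAI's implementation:
    # two encoder/decoder pairs placed back to back learn to associate co-occurring
    # inputs from two modalities ("video" features and "spoken name" features)
    # with no hand labelling, only paired observations.
    rng = np.random.default_rng(7)
    dx, dy, k, n_objects = 12, 8, 4, 3

    # Synthetic stand-ins for the two sensory streams: each object has a visual
    # prototype and an audio prototype; observations are noisy versions of them.
    vis_proto = rng.normal(size=(n_objects, dx))
    aud_proto = rng.normal(size=(n_objects, dy))

    Ex = rng.normal(0, 0.1, (k, dx)); Dy = rng.normal(0, 0.1, (dy, k))  # video -> code -> name
    Ey = rng.normal(0, 0.1, (k, dy)); Dx = rng.normal(0, 0.1, (dx, k))  # name -> code -> video
    lr = 0.01

    for _ in range(8000):
        c = rng.integers(n_objects)                   # an object is seen and named together
        x = vis_proto[c] + 0.1 * rng.normal(size=dx)
        y = aud_proto[c] + 0.1 * rng.normal(size=dy)

        zx = Ex @ x; err_y = y - Dy @ zx              # video code predicts the spoken name
        Dy += lr * np.outer(err_y, zx); Ex += lr * np.outer(Dy.T @ err_y, x)

        zy = Ey @ y; err_x = x - Dx @ zy              # name code predicts the visual features
        Dx += lr * np.outer(err_x, zy); Ey += lr * np.outer(Dx.T @ err_x, y)

    # Show the learned association: a new noisy view of object 0 retrieves its name.
    x_test = vis_proto[0] + 0.1 * rng.normal(size=dx)
    pred_name = Dy @ (Ex @ x_test)
    print("retrieved object:", int(np.argmin(np.linalg.norm(aud_proto - pred_name, axis=1))))

In ORBAI's architecture, the same associative pairing is meant to happen between streams of spikes rather than static feature vectors, and to continue live as new data arrives.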

These networks can be used in vision systems, sensory systems, speech pipelines (STT, NLP, TTS), control systems for robots, drones, and cars, and many other functions. NeuroCAD allows the user to design, train, and evolve spiking NNs to perform arbitrary tasks.

So how does this compare to TensorFlow, Torch, Caffe, and the rest? It will replace all of them within a few years, as it uses more flexible, powerful, and general spiking neural network technology and has the tools to train and specialize these networks to do all of the specific functions that deep learning toolkits like TensorFlow and Torch now handle with a collection of singular hacks.