ORBAI Press Package for NVIDIA GTC 2019
28 Mar, 2019
ORBAI is a startup based in Silicon Valley working in advanced artificial intelligence and robotics. ORBAI develops AI technology based on its patent-pending time-domain neural network architecture, spanning computer vision, sensory processing, speech, navigation, planning, and motor control. We license these technologies to put intelligence into robots, drones, cars, appliances, toys, homes, and many other consumer applications.
With these tools and technologies, we develop our own artificial people, enabled with vision, speech, and motion, that can interact with you, talk to you, and do real jobs. We are starting with projected 'holograms' of 3D animated and rendered characters, which will evolve into full robotic humanoids as the technology matures.
At NVIDIA GTC on 18-21 March 2019, ORBAI will show a 6 ft holographic, talking bartender named James, who takes verbal drink orders and serves real drinks with a robotic drink mixer. He can also tell you jokes and give you wise bartender advice. We will also show a 5'6" holographic concierge named Claire, who handles all sorts of verbal information queries about travel ("Is my flight on time?"), restaurants ("Tell me a good Vietnamese restaurant near the convention center"), and more. Both can answer a wide variety of questions about GTC, such as who is exhibiting and where they are located on the expo floor, and can also provide the latest updates on weather, traffic, and other information.
Both Claire and James are powered by ORBAI Gen2 AI: a mix of commercial technologies such as Houndify and Unreal Engine 4 (UE4) with our custom interface, interaction, and animation software. These are continuously upgraded to make our artificial people smarter, more interactive, more fluid in their responses, and more natural in their movement and interaction, but there are limits to what current deep learning technology can do.
Demo Video: https://youtu.be/4IKgZENqYzQ
At GTC we will also give a sneak preview of the revolutionary new ORBAI Gen3 AI, built on our novel, patent-pending neural network architecture and the NeuroCAD 3D UI tools we have developed for working with and shaping these networks. Over the next year we will effectively perform a full brain transplant on James, replacing his vision, hearing, comprehension, speech, movement, and facial animation with an architecture that works more like a human nervous system. It not only has much more advanced functionality and features, but also learns like we do: from observation, from interaction with the world, and from practice and experience, constantly getting better. On March 21, at 10:00 am at GTC, we will give our 'Chappie' demo of this technology, showing a robot learning to see, hear, and speak, and to associate objects shown to it with their spoken names, just like the scene in the movie.
We will use our NeuroCAD neural network design tools to design, test, and integrate our vision, speech, and motor control networks into our artificial people. The same tools can be applied to other companies' applications, such as the neuromorphic interface we are developing for the DARPA bionic arm, and by companies that want to design and develop AI with NeuroCAD and integrate our vision, speech, and motor control networks into their own products and services.
NeuroCAD will be available in a limited beta release in late 2019 and a wider commercial release in 2020. We will sell per-seat annual licenses for the NeuroCAD 3D tools themselves, and license our pre-built modules for vision, speech, sensory input, and motor control for customers to integrate and customize for their own applications, including robotics, drones, self-driving cars, and smart and interactive homes, appliances, devices, and even smart, interactive toys.
ORBAI Gen3 Technology Preview
Humanoid AI is our collection of vision, hearing, comprehension, speech, and animation control systems, built with our dynamic neural networks and integrated so that they work and learn together, making our artificial people more human-like to look at, speak with, and interact with.
ORBAI Neural Networks are an advancement of spiking neural networks, which work more like the human nervous system: realistic computer models of dynamic, spiking neurons, synapses, axons, dendrites, and signals that propagate in time and space, woven together into a novel, self-learning configuration we call Bidirectional Interleaved Complementary Hierarchical Neural Networks.
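To give a flavor of the dynamic, spiking units such networks model, here is a minimal sketch of a classic leaky integrate-and-fire (LIF) neuron. This is a textbook illustration under generic, arbitrary parameters, not ORBAI's actual neuron model or values:

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Return spike times (in steps) for an input current trace sampled every dt ms.

    Illustrative leaky integrate-and-fire dynamics; all parameters are
    arbitrary example values, not ORBAI's.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input over time.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:      # crossing the threshold emits a spike...
            spikes.append(t)
            v = v_reset           # ...and the potential resets
    return spikes

# A constant supra-threshold drive produces regular, repeated spiking.
spikes = simulate_lif([1.5] * 100)
```

The key property, in contrast to conventional deep learning units, is that the output is a sequence of spike times rather than a single activation value, so information is carried in when the neuron fires.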
NeuroCAD is a software tool with a UI for designing our dynamic neural networks. It lets the user lay out layers of spiking neurons, connect them algorithmically, then crossbreed and mutate them to generate a population of similar neural nets. The tool runs simulations on the population, trains it, culls the underperformers, crossbreeds the top-performing designs, and continues the genetic algorithm until a design emerges that meets the performance criteria set by the designer.
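The evolve-train-cull loop described above can be sketched as a generic genetic algorithm. The encoding and fitness function below are stand-ins for illustration only; a real system like NeuroCAD would be training and evaluating spiking networks, not bit vectors:

```python
import random

def evolve(fitness, genome_len=16, pop_size=20, keep=5, generations=30):
    """Generic genetic-algorithm loop: rank, cull, crossbreed, mutate, repeat."""
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank designs by fitness and cull the underperformers.
        population.sort(key=fitness, reverse=True)
        survivors = population[:keep]
        # Crossbreed top performers and mutate offspring to refill the population.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]          # single-point crossover
            if random.random() < 0.2:          # occasional mutation
                i = random.randrange(genome_len)
                child[i] ^= 1
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Toy objective: maximize the number of 1 bits in the genome.
best = evolve(fitness=sum)
```

In NeuroCAD's case, the "genome" would be the parametric gene mapping that specifies a network's connection scheme, and "fitness" would be the trained network's performance on the designer's criteria.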
Advanced tools like NeuroCAD are necessary for developing dynamic spiking neural networks because there is no human intuition for how to construct and connect them. We provide a visual tool with simple, intuitive methods for specifying connection schemes using a parametric gene mapping, and we will allow users to run the spiking neural nets on CPUs, GPUs, and neuromorphic processors such as Intel Loihi and IBM TrueNorth.