Mission

ORBAI develops revolutionary Artificial Intelligence (AI) technology based on our proprietary, patent-pending time-domain neural network architecture. We license this technology for computer vision, sensory processing, speech, navigation, planning, and motor control, putting the ‘smarts’ in robots, drones, cars, appliances, toys, homes, and many other consumer applications.

 

 

ORBAI's Bidirectional Interleaved Complementary Hierarchical Neural Networks (BICHNN) are a novel, patent-pending AI architecture using two interleaved spiking neural networks that train each other to perform tasks, much as the human sensory cortices do. As a result, they train simply by sensing and interacting with the real world, learning from experience, context, and practice, without canned training data or explicit training sessions. NeuroCAD is the tool suite and UI for designing, testing, scaling, and evolving these advanced network architectures.

 

Imagine speech interfaces that converse with you fluidly and get better just by talking to you, picking up your vocabulary and sayings; artificial vision systems that learn to see and identify objects in the world as they encounter them; and robots, drones, cars, toys, consumer electronics, and homes that use all of this together to truly perceive the world around them, precisely plan and control their actions and movement, learn as they explore and interact, and become truly artificially intelligent.


We will sell licenses for NeuroCAD, a software design toolkit that allows AI engineers to construct, test, and scale these novel neural networks for functions like vision, speech, control, and decision making, and to integrate them into products like those above. We will license both the network technology itself and pre-built network modules for integration and deployment on a royalty basis, so our revenue scales with our customers'. We will also build our own premier products with these tools and technologies to showcase the high-end features and to generate additional revenue.
We will sell directly to Fortune 500 customers and leverage the sales channels of our partners (who provide computer hardware and solutions) to reach a much larger number of mid-size and large customers wanting to use AI in their services and products. For academia and startups, we will provide a free, feature-restricted license of NeuroCAD to learn and prototype with.

 

We are currently seeking a seed round of $5M, offering ORBAI stock in return. This will allow us to hire more engineers and fund R&D to develop the AI neural net technology, NeuroCAD, and pre-built network modules so we can get them into the hands of beta customers, who will help us round out the feature set and refine the functionality ahead of a wider release soon after.

 

The market for AI technologies, products, and services is forecast to grow into the trillions of dollars over the next 5 years, and we will have a very powerful, flexible, and unique AI technology, toolset, and products (superior to today’s Deep Learning technologies) to supply to this exploding market.

 

 


 

ORBAI is developing BICHNN, a patent-pending advanced neural network architecture that solves many of the fundamental problems with traditional deep learning networks. Our Bidirectional Interleaved Complementary Hierarchical Neural Networks use much more advanced spiking neuron models that simulate how time-domain signals enter, exit, and are processed by real biological neurons and synapses, making them far more powerful, flexible, and adaptable than traditional static Deep Learning ‘neural’ nodes. This signal processing is much more information-rich and dynamic, and because it all takes place in the time domain it is well suited to sensory processing, speech, and motor control, providing a network that is a close match in form and function for these applications.
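
To make the time-domain behavior concrete, here is a minimal sketch of a generic leaky integrate-and-fire (LIF) spiking neuron in Python. This is a textbook illustration of how a spiking neuron integrates an input signal over time and emits discrete spikes; it is not ORBAI's proprietary neuron model, and all names and constants are illustrative.

import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=-65.0,
               v_reset=-65.0, v_thresh=-50.0, r_m=10.0):
    """Generic leaky integrate-and-fire neuron (illustrative only).

    input_current: samples of input current over time (one per dt).
    Returns the membrane voltage trace and the indices of the spikes.
    """
    v = v_rest
    voltages, spike_steps = [], []
    for step, i_in in enumerate(input_current):
        # The membrane potential leaks toward rest while integrating the input.
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
        if v >= v_thresh:            # crossing threshold emits a spike...
            spike_steps.append(step)
            v = v_reset              # ...and the potential resets
        voltages.append(v)
    return np.array(voltages), spike_steps

# A constant input current produces a regular spike train in the time domain.
_, spikes = lif_neuron(np.full(1000, 2.0))
print(f"{len(spikes)} spikes in 1 second of simulated time")

In a spiking network, information is carried by the timing and rate of these spikes rather than by a single static activation value, which is what gives time-domain networks their richer dynamics.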

 

 

By placing two of these networks together, with their signals moving in opposite directions but interacting and providing feedback to each other, our BICHNN architecture allows the networks to self-train, just as the human visual cortex does.

 

 

To allow us to see and perceive, the brain takes in images from the retina and sends them up through the visual cortex, processing them into more abstract representations, or ideas. It also has a complementary set of signals moving in the opposite direction, expanding ideas and abstractions into images that represent what we anticipate or expect to see. Try closing your eyes and picturing a FIRE TRUCK. There, you can see it; that was your visual cortex working in reverse, projecting an image of what you were thinking of. The interaction between these two streams of signals is what focuses our attention and trains our visual system to see and recognize the world around us from everyday experience. Deep Learning CNNs for vision CANNOT do this at present; they rely on a crude approximation, using backpropagation and gradient descent to statically pre-train the network on a set of labelled images.
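
As a rough illustration of the general principle (not ORBAI's actual BICHNN implementation, and simplified to rate-based units rather than spiking ones), the sketch below couples a forward "recognition" pathway with a backward "generative" pathway and uses the mismatch between the real input and the reconstructed input to drive learning, with no labelled data involved.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_abstract = 64, 16          # e.g. a tiny image patch and its abstraction

W_fwd = rng.normal(scale=0.1, size=(n_abstract, n_in))   # image -> abstraction (recognition)
W_bwd = rng.normal(scale=0.1, size=(n_in, n_abstract))   # abstraction -> image (generative)

def self_train_step(x, W_fwd, W_bwd, lr=0.01):
    """One self-supervised step on a single unlabeled input pattern x."""
    h = np.tanh(W_fwd @ x)         # forward pathway: abstract the input
    x_pred = W_bwd @ h             # backward pathway: 'imagine' the input again
    error = x - x_pred             # mismatch between reality and expectation
    # The feedback pathway carries the mismatch back so both pathways improve,
    # a crude stand-in for the attention/feedback interaction described above.
    W_bwd = W_bwd + lr * np.outer(error, h)
    W_fwd = W_fwd + lr * np.outer((W_bwd.T @ error) * (1.0 - h**2), x)
    return W_fwd, W_bwd, float(np.mean(error**2))

# Repeatedly 'experiencing' unlabeled patterns shrinks the prediction error.
patterns = rng.normal(size=(200, n_in))
for epoch in range(20):
    errs = []
    for x in patterns:
        W_fwd, W_bwd, e = self_train_step(x, W_fwd, W_bwd)
        errs.append(e)
print(f"mean reconstruction error after training: {np.mean(errs):.3f}")

The point of the sketch is only that the two pathways can train each other from raw experience; a spiking, hierarchical implementation of the same idea is far richer in dynamics.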

 

By using these more powerful and realistic neuron models, architecting more brain-like neural networks with them, and exploiting the same design that all our human sensory cortices use to sense, interpret, abstract, and learn about the environment around us, ORBAI can build artificial vision, sensory, speech, planning, and control AI that goes far beyond what today’s Deep Learning can do. It can make the AI in robots, drones, cars, smart appliances, and smart homes orders of magnitude more intelligent, perceptive, interactive, and capable, and most of all, able to learn while interacting with people and the environment in the same way we humans do: from observation, interaction, experience, and practice.

 

This will truly propel Artificial Intelligence to become what we all have envisioned it should be. It will give us robots that are true companions and helpers in our daily work: interacting with us like a person, conversing fluidly, understanding what we want them to do, asking questions, seeing and sensing their environment, moving around safely in it, interacting with the objects in it, and being truly helpful, able to do real jobs. Working robots will be able to take on much more advanced and varied tasks, easily resolving ambiguity and adapting their actions as the work environment and situation change, performing jobs that today’s pre-programmed industrial robots cannot.

 

Even ordinary appliances will become smart and interactive, and your home will converse with you, have a personality, know you, your habits, and what you want, and assist you by operating everything in it. Self-driving cars will reach Level 5 autonomy by incorporating our dynamic vision, planning, navigation, speech, and control systems, learning from everyday driving. The fleet will not only learn to drive safely from its combined, constantly accumulating experience and knowledge, but will also come to know you and your driving habits and help drive you more safely and efficiently to where you want to go.

 

We prototype and test this technology in our humanoid hologram and android creations because driving the sensory systems, AI, and motion control of humanoids takes far more advanced, powerful, and flexible neural network architectures than today's DL technologies can deliver. It requires dynamic spiking neural network technology like BICHNN, whose rich time-domain behavior makes it ideal for motor control and for sensory applications like vision, hearing, and touch. These networks can dynamically learn to sense the environment and control our artificial creations so that they move and act like real people, with realistic facial expressions and lip sync to go with their intelligent, interactive speech. This is THE killer app for AI.

 

With time, these advanced neural architectures will be able to power very realistic human-like 3D characters and androids that look like us, move like us, and speak and emote like us, with realistic facial expressions, lip sync, fluid conversational ability, and even emotional perception and empathy. Someday they will have human-level cognition, understanding, and intelligence, and someday, further out, they will even go beyond us. This is the goal at which we have aimed our AI technologies, because emulating a human brain and making it control a human body and face well is the single hardest thing to do in AI; if we aim our technologies at this goal and keep moving toward it, revolutionizing the other applications will naturally fall out of these efforts.

 

ORBAI has already developed two generations of Artificially Intelligent Holographic People that you can walk up to, talk to, and ask questions of. They were built using mostly off-the-shelf technologies for 3D graphics, animation, speech recognition, natural language processing, database query, and speech synthesis. In the Gen 1 prototype, we used the Google Speech API for speech recognition, Google Dialogflow for the AI/NLP, and Unity 3D for the 3D rendering and animation. We even sold one to Tim Draper, the Silicon Valley VC, for his lobby, so people could ask ‘Holographic Tim Draper’ common questions.
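
As a rough sketch of how a Gen 1-style pipeline can be wired together, the Python below feeds captured microphone audio through Google Cloud Speech-to-Text and then into a Dialogflow agent, whose reply would be handed to the Unity front end for voice synthesis and lip-synced animation. The exact libraries, versions, and glue code ORBAI used are not documented here; project_id, session_id, and the audio handling are placeholders, and client signatures may differ across SDK versions.

from google.cloud import speech, dialogflow

def transcribe(audio_bytes, sample_rate=16000):
    """Turn raw 16-bit PCM microphone audio into text with Cloud Speech-to-Text."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=sample_rate,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    return response.results[0].alternatives[0].transcript if response.results else ""

def ask_agent(project_id, session_id, text):
    """Send the transcript to a Dialogflow agent and return its text reply."""
    sessions = dialogflow.SessionsClient()
    session = sessions.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code="en-US")
    )
    result = sessions.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return result.query_result.fulfillment_text

# transcript = transcribe(mic_capture)   # mic_capture: bytes from the kiosk microphone
# reply = ask_agent("my-gcp-project", "visitor-42", transcript)
# The reply text is then sent to the Unity character for TTS and lip sync.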

 

In the Gen 2 prototypes, we used the SoundHound Houndify API for speech and AI, along with its extensive library of plug-ins for services like Wikipedia, Yelp, Expedia, and OpenTable. We did the 3D graphics and animation with the Unreal Engine 4. We built the Claire the Concierge, Nicole the Medical Assistant, and James the Bartender demos with this tech, with professionally created characters from our content team and contractors.

 

 

James, our holographic life-size bartender, was a hit at TechCrunch in San Francisco in September 2018, taking verbal drink orders and serving the crowd with his drink-mixing robot bartender (Barbotics). Claire, our British-accented holographic AI concierge, wowed crowds at Augmented World Expo 2018 with a verbal interface that let attendees get directions, restaurant recommendations, and flight times, and even book hotels, cars, and other travel. She could then e-mail the details to them so they could later use her directions and recommendations and complete orders on their own devices and laptops.

 

Claire and James Demo Video: https://youtu.be/4IKgZENqYzQ

Galvanize TV interviewing ORBAI at TechCrunch 2018: https://youtu.be/0gTs8ZFjmgI

 

These were very important experiments to run, because when we showed these life-size holographic characters at events, people saw them and immediately gravitated to them. We harvested enormous amounts of data on how people interacted with them: what they asked, whether they got a satisfactory reply, what the failure rates on speech recognition were under different background noise conditions, and which queries succeeded and which did not. Most importantly, we got to test whether the concept even works. Will people walk up and talk to a life-size artificial person, and do they consider it intelligent enough to stay and keep talking with? We found that with today’s speech technologies the answer is NO. Most people do not have the patience or skill to format their speech into the staccato, highly structured sentences, with a DOS-like command and parameters, that today’s commercial speech AI needs to understand them and do what they want, and they give up after a couple of failed tries.

 

That groundwork provided the guidance and foundation for the inventions in the provisional patent we filed; each one addresses one of the key weak areas we found in talking, interactive AI. We now want to expand on these ideas and lock these inventions down in full patents so we can put up a barrier to entry for the other companies that will need to solve these same issues (like Google, Amazon, and others). We also now know where to focus our R&D efforts to maximize our impact, by developing THE most critical technologies first, the ones that will have the most impact on building better, more interactive AI.

 

ORBAI currently has a provisional patent covering 20 AI inventions, ‘Controlling 3D Characters and Androids’, filed for ORBAI by Cooley LLP with the USPTO on 6/20/2018 (Docket Number ORBA-001/00US 333534-2001). With that, here is our tech history and roadmap, from 2017 to 2022:

 

 

ORBAI’s advanced AI, using neuromorphic, brain-like spiking neural nets with our BICHNN technology that allows them to train dynamically, is in early R&D and will debut to the AI world in a technical session and demo at the NVIDIA GTC conference in March 2019, along with our NeuroCAD UI and tools that let people develop, test, and scale these advanced neural networks for their applications. We are seeking funding to finish the R&D needed to bring this technology to commercial readiness for beta licensing in robotics, drones, self-driving cars, smart appliances, smart homes, and our own holographic 3D humanoid employees and other applications.

 

We will license our BICHNN neural network technology and seats of the NeuroCAD development software to AI developers to use in creating their own applications. They will pay an annual per-seat license for the NeuroCAD software and a licensing fee for the BICHNN tech that scales with their deployment, and we will also provide SaaS/cloud services for deployment.
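
To make the revenue model concrete, here is a toy calculation of how per-seat licenses and deployment-scaled royalties combine. All prices, volumes, and rates below are purely illustrative assumptions for the sake of the arithmetic, not actual ORBAI pricing.

def annual_revenue(seats, seat_price, units_deployed, unit_price, royalty_rate):
    """Illustrative only: annual revenue from NeuroCAD seats plus BICHNN royalties."""
    seat_licenses = seats * seat_price                       # per-seat NeuroCAD licenses
    royalties = units_deployed * unit_price * royalty_rate   # scales with the customer's deployment
    return seat_licenses + royalties

# Hypothetical example: 200 seats at $5,000/year, plus a 2% royalty on
# 1,000,000 deployed units carrying $50 of licensed content each -> $2,000,000/year.
print(annual_revenue(200, 5_000, 1_000_000, 50, 0.02))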

 

We will design, architect, and train specific, custom neural nets for vision, sensory processing, cognition, control, and speech with BICHNN, available as pre-built modules to be used within NeuroCAD when creating these applications. These will have a one-time cost, plus licensing fees that scale with deployment.

 

ORBAI's plan is to develop these advanced neural architectures to their full potential, and not only license them to third parties, but also invest more to build our own robotics and AI products with them that can work in the home and fill many jobs in manufacturing, transportation, medicine, law, agriculture, and other areas. In 5 years, we could expand the neural network capability to near-human AGI and build overseer-class AIs that manage incredibly complex infrastructure at the corporate, municipal, state, and federal level for finance, resource management, utilities, and many other areas, handling far more data input and far more complex analysis than teams of people ever could.

 

What ORBAI is offering today are the first steps toward real AI: flexible and powerful neural networks that work more like our own brains and sensory cortices (based on decades of solid neuroscience research), that can perceive and learn from the world around them (as well as from the data we give them), and that can grow much more powerful, capable, and general-purpose with time. Once fully realized, ORBAI’s BICHNN and NeuroCAD will make today’s limited, clunky DL technology look like Tinkertoys by comparison, and we could quite feasibly displace much of DL in that multi-trillion-dollar AI market in the next 5 years, and grow far beyond that afterwards. The revenue, influence, and reputation gained from bringing these technologies, products, and services to the world could easily take ORBAI to a record-breaking IPO in 5 years and propel it to become a trillion-dollar company in 10 years. I think it is worth a little risk to see if we can make the basics work this year and then run with it.

ORBAI Demo Video

If this sounds interesting to you, please reach out to us.

Brent Oster, CEO ORBAI