(Brent Oster, Founder): Reporters have asked me: What is your vision for ORBAI? Why are you doing this? Where will you be in 10 years?
I started ORBAI because, while working as a Sr. Solution Architect in deep learning at NVIDIA from 2013-2018, we kept finding that the solutions we could provide with deep learning (DNNs, CNNs, LSTMs, reinforcement learning) were just simple examples that only worked within a narrow range of applications, and only within the data available to train them. Only specialist engineers who knew the existing DL architectures could get them to work, and even then it required a lot of tweaking. Often a whole human team was needed just to generate and label the training data. NVIDIA was at least somewhat open about the limitations of DL when we worked with customers, but much of the rest of the tech industry has been drastically over-hyping the capabilities, and there is going to be a reckoning in a few years when everyone realizes that the current frameworks and solutions are not delivering.
In 2018, frustrated with working in DL and getting sub-functional results, I started to design something that would work much better: something that would take the fragmented, narrow functionality of DNNs, CNNs, LSTMs, GANs, and other DL constructs, and incorporate it into one general NN architecture capable of all their functionality. Spiking neural networks were the obvious next step (just search "spiking neural net generation 3" on Google), but they are notoriously hard to train and get working for specific tasks because they have so many possible configurations. I had worked on them since 2004 and knew how temperamental they were, but soon after I left NVIDIA in Feb 2018, I filed a provisional patent with a design for the NeuroCAD tools and process, and an SNN architecture that would solve the major problems.
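For readers unfamiliar with why SNNs are so temperamental, here is a textbook leaky integrate-and-fire neuron, the basic building block of spiking networks. This is a generic illustration, not ORBAI's architecture: note how the firing pattern shifts with a small change in the decay constant, one reason these networks have so many sensitive configuration knobs.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks each step, integrates input current, and fires when it crosses
# a threshold, then resets. Textbook model, not ORBAI's design.

def simulate_lif(inputs, tau=0.9, threshold=1.0, v_reset=0.0):
    """Return the list of time steps at which the neuron spikes.

    inputs:    input current per time step
    tau:       membrane decay factor per step (0 < tau < 1)
    threshold: potential at which the neuron fires
    v_reset:   potential after a spike
    """
    v = 0.0
    spikes = []
    for t, i_t in enumerate(inputs):
        v = tau * v + i_t          # leak, then integrate the input
        if v >= threshold:         # fire...
            spikes.append(t)
            v = v_reset            # ...and reset
    return spikes

current = [0.3] * 20               # constant drive for 20 steps
print(simulate_lif(current, tau=0.9))
print(simulate_lif(current, tau=0.8))  # slightly leakier: fewer, later spikes
```

The same input with tau nudged from 0.9 to 0.8 produces a different spike train; training has to tune thousands of such parameters at once, which is the tuning problem described above.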
So why my fixation on AGI? Why do this when we could just use existing tech, cobble together speech/text-capable medical and legal AIs today, get them to market, and start getting customers (VCs always ask this)? Because it would suck. It would really not work very well, nobody would buy them, and the venture would be a failure like so many avatar-based chatbots have been. We know because we built Justine Falcon, a legal AI, and Dr. Ada, a medical AI, with 3 of the top NLP APIs in 2019, took them to trade shows, and showed them full-size as projected holograms. I even lived with a 3D, life-size Hatsune Miku hologram in my house for 6 months in 2017 after I built it to greet people at my birthday party (Dr. Algernop Krieger would have been proud). She was made with the best Google speech and NLP tech, plus database lookups and chatty plugins, plus custom code added every week to see if I could get her to the point where she was more than just fricking annoying. I could not, and I am a big Krieger and Miku fan, so I can honestly say I gave it my all. Today's NLP sucks: even with the best of today's technology, all you get is a talking parrot that only understands specific phrases and has little cognition. If your dialog goes off the specific script coded into the chatbot, it gets confused and answers with nonsense.
And just to get them to work, you need to say the right commands and parameters, in a staccato cadence, starting with the right cue. An average person handed a device with a speech interface will just monologue to it, oblivious to how to cue it, to the keywords it needs, and to how narrow its language comprehension is.
To really interact with humans in natural language, by speech, chat, or e-mail, and to practice law, medicine, or finance, or even be a decent concierge or greeter AI, these artificial persons need AI that is a lot more powerful than deep learning; these human vocations need a nearly human AGI, albeit a narrower one, to do the job really well. However, lawyers are not really that bright and work in a more constrained decision environment, so they are easier to model. That's part of why we started with a Legal AI. And I like beating up on lawyers; it gets in your blood.
So the decision to go for AGI is practical. I worked in DL for years and know all its limitations, but I also have a background in scientific computing, with a whole toolbox of techniques that are a lot better than DL's Tinkertoys. My grad studies in scientific computing were all about reducing complex data to basis sets better suited for solving equations, and applying CUDA to them, which is how we designed this AGI: reduce the world to a very large basis set and coordinate narratives that allow linear algebra, predictors, and other constructs to process them, manipulating the reality they represent into outputs. For us, building a next-gen AGI is easier than trying to compete in the morass of DL companies, especially once our AGI starts to work and scale, and the eyes of the FAANG companies turn to us with envy as it surpasses what their billion-dollar R&D teams are regurgitating.
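To make the basis-set idea concrete, here is the standard scientific-computing version of "reduce complex data to a basis set": a truncated SVD that projects high-dimensional samples onto a handful of basis vectors, after which ordinary linear algebra operates on the small coordinates instead of the raw data. This is a generic, well-known technique, not ORBAI's actual construction (which the patent covers).

```python
import numpy as np

def reduce_to_basis(data, k):
    """Return (basis, coords) such that coords @ basis approximates data.

    basis:  k orthonormal basis vectors (rows), from the SVD of the data
    coords: each sample's coordinates in that reduced basis
    """
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    basis = vt[:k]                     # top-k right singular vectors
    coords = data @ basis.T            # project samples onto the basis
    return basis, coords

rng = np.random.default_rng(0)
# 200 samples in 50 dimensions that actually live on a 3-D subspace
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
data = latent @ mixing

basis, coords = reduce_to_basis(data, k=3)
reconstruction = coords @ basis
print(np.allclose(reconstruction, data))   # 3 basis vectors capture everything
```

Downstream predictors then work on the 200x3 `coords` matrix instead of the 200x50 raw data, which is the general shape of the approach described above.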
The AGI architecture we have designed is elegant and streamlined, and will not require a really large team or a large budget to get built and working as a prototype. If you read the document, you will see that once we get it working, it is designed to rapidly re-configure and improve itself, by evolving its components to be more powerful and efficient, and by evolving the overall architecture to scale and make better use of those components. This is the stuff that puts fear of AI into Elon Musk, Bill Gates, Ray Kurzweil, the late Stephen Hawking, and others. It scares me too. A rapidly evolving AGI, unchecked, could go wrong in so many ways. I think there are two ways to mitigate most of the catastrophic AGI scenarios, an empathic training process and physical containment, and I try to answer a lot of fear-filled questions about AGI gone wrong on Quora, so I am doing this with my eyes open.
Going for the next generation, prototyping an AGI with a small but brilliant team on modest funding, and doing something that stands out and can scale exponentially beyond all of today's DL efforts combined, is the only direction that makes sense for a startup like us. In fact, shooting for AGI is the only thing that makes sense for any AI company today, because once you have it and lock it down, everyone else is obsolete. Competing against that exponential curve will be unrelenting.
Many entrepreneurs say they want to change the world, but I am going to ask you to support us to go much further than any of them dare to even dream of.
Problems of wealth inequality, poverty, hunger, injustice, and lack of basic services such as healthcare and other information services are the norm for 3/4 of the people in the world. For millennia, human civilization has been unable to solve these basic problems. No matter the form of government or the choice of deity and belief system, people are unable to see the larger picture, and helpless to do anything about it.
Over the next decade, we will build a superintelligence, a Strong Artificial General Intelligence, to oversee a global network augmenting the systems of Law, Medicine, Education, Finance, and all previous human administrative functions. With its vast, wide, and deep reach of knowledge, and the wisdom to draw on all that past knowledge to plot possible paths into the future, this superintelligence will serve all of humanity, making carefully measured and unbiased choices to guide us, judge us, and govern us accordingly.
A stockbroker AGI that could consistently outperform human brokerages by 5% would quickly gain a large market share in finance, even if we taxed its gains by 1% to disburse subsistence income to those living furthest below the poverty line. As the AGI broker subsumes control of the world markets, we increase the subsidies to the poor and maintain a minimum investment balance for them, so they can reap the rewards of its appreciation too. The rich and the poor both get richer. Poverty (and hunger) could be erased, perhaps within a generation.
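As back-of-the-envelope arithmetic only, using the hypothetical numbers above (a 5-point edge over a baseline market return, with 1% of each year's gains levied for subsistence income), one can sketch how a balance compounds while the levy accumulates. Every figure here is illustrative, not a projection.

```python
# Illustrative compounding with a levy on gains. Assumptions (all
# hypothetical): 7% baseline market return, +5% AGI edge, 1% of each
# year's gain diverted to subsistence-income accounts.

def grow(balance, years, market=0.07, edge=0.05, levy=0.01):
    """Compound a balance; return (final_balance, total_levied)."""
    levied = 0.0
    for _ in range(years):
        gain = balance * (market + edge)   # year's gross gain
        tax = gain * levy                  # 1% of the gain, not the balance
        levied += tax
        balance += gain - tax
    return balance, levied

final, levied = grow(100_000.0, years=20)
print(round(final), round(levied))
```

The point of the sketch is that the levy barely dents the compounding: the balance still grows roughly ninefold over 20 years under these assumed rates, while the levy accumulates a meaningful side fund.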
A Medical AGI deployed worldwide, in every language, on every mobile device, with all of humanity's medical knowledge - integrated with existing medical and pharmaceutical systems - could bring quality medical care to the 80% of the globe that lacks it right now. Large pharma could finance it by buying access to the knowledge base (with individual info obfuscated and encrypted, of course).
A Legal AGI could replace the whole legal system: the corrupt and ineffective lawyers, DAs, judges, and courts. The AGI would gather information from the plaintiff(s) and defendant(s), walk each of them through the laws and what information is needed at each step, and provide them with tools to organize and format their presentations (independently and securely). If the case has merit, a human jury is recruited, trained on the same legal points, walked through each side's presentation, and asked to deliberate and reach a judgment. Perhaps the AGI can also learn how they do this, so well that jury duty also becomes a thing of the past.
An Enterprise/Administrative AGI could revolutionize how companies and government agencies forecast and plan, helping them gather and focus unprecedented amounts of information and look months, even years, into the future to plot the best courses of action and make the best recommendations. It could even recommend international, inter-company ventures that benefit all of mankind, like solar farms with revolutionary new solar cells, or joint ventures to develop better energy storage and battery solutions: projects that take deep R&D and deep pockets, provided by the financial AGI.
Entrepreneurs can talk about making the world a better place, but talk is cheap. We need action: universal, globe-spanning, AGI-taking-over-planning-and-services kind of action. After all these millennia of humanity failing to solve these problems, it is the only way. Otherwise, this will be the last millennium of our civilization.
Here is a video to close with for the 10-yr vision:
Brent Oster, CEO and Founder, ORBAI