Developing an Artificial General Intelligence
ORBAI is developing Artificial General Intelligence that will enable more advanced AI applications with conversational speech, human-like cognition, and planning. It will find first use in smart devices, homes, and robotics, then in our Gen3 Humanoid AI and in online conversational professional services in finance, medicine, law, and other fields. It can bring enhanced analytics, forecasting, and decision-making to enterprise software, where businesses large and small can use it to develop global strategies. As we grow the diversity and depth of the services we provide and spread worldwide, our AGI will learn new professions and languages, bringing healthcare, justice, and prosperity to places that never knew them before, leveling the playing field, raising the quality of life for everyone - and building a multi-billion dollar company in the process.
What we usually think of as Artificial Intelligence (AI) today - the human-like robots and holograms of our fiction, talking and acting like real people, with human-level or even superhuman intelligence and capabilities - is actually called Artificial General Intelligence (AGI), and it does NOT yet exist anywhere on earth. What we actually have for AI today is much simpler and far narrower Deep Learning (DL), which can only do some very specific tasks better than people. It has fundamental limitations that will not allow it to become AGI, so if AGI is our goal, we need to innovate and come up with better networks and better methods for shaping them into an artificial brain.
Existing deep learning AI cannot do most real jobs. Deep learning in general has been a disappointment, and has not delivered the AI that the big tech companies promised a decade ago. Truly conversational speech interfaces do not exist with today's technology, nor does usable artificial intelligence and cognition that can do real jobs.
The reason speech interfaces in devices are so limited and awkward to talk to is that existing DL for speech-to-text and natural language processing is very narrow: it can only be trained to recognize specific phrases and map them to specific intents, actions, or answers. With some clever scripting this gives a skeleton of language comprehension, but not conversational speech capability.
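The intent-mapping limitation described above can be sketched in a few lines. This is a toy illustration, not any vendor's actual pipeline: the phrases and intent names are hypothetical, and real systems use trained classifiers rather than a lookup table, but the structural problem is the same - every utterance must resolve to a predefined intent, and anything outside the trained set falls through.

```python
# Toy sketch of narrow intent-mapping NLU. Phrases, intent labels,
# and the fallback name are all hypothetical examples.

INTENTS = {
    "turn on the lights": "lights_on",
    "turn off the lights": "lights_off",
    "what is the weather": "weather_query",
}

def classify(utterance: str) -> str:
    # Look the normalized utterance up in the fixed intent table.
    # A paraphrase the system was never trained on cannot be handled,
    # no matter how natural it sounds.
    return INTENTS.get(utterance.strip().lower(), "fallback_unknown")

print(classify("Turn on the lights"))            # -> lights_on
print(classify("could you brighten the room?"))  # -> fallback_unknown
```

A production system replaces the dictionary with a statistical classifier, which tolerates paraphrases but still cannot step outside its fixed set of intents - hence "a skeleton of language comprehension."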
As another example, today's technology cannot give us useful home robots - ones that can freely navigate our homes, avoid obstacles, pets, and kids, and do useful things like cleaning, laundry, even cooking. The narrow slices of deep learning available for vision, planning, and control of motors, arms, and manipulators cannot take in all this varied input, plan, and carry out useful tasks. The same problems prevent us from having interactive chat avatars online that can adapt to dynamic scenarios and fill advanced human roles with their range of input, interaction, and responses. We need AI that can see, hear, think, solve real-life problems, and communicate with us naturally.
While many thought leaders and the AI industry are still only speculating about AGI, all agree on the enormous impact that a flexible, more generalized artificial intelligence - one that can learn unsupervised and grow exponentially - would have. This is what ORBAI is building: an AGI system built from scratch using spiking neural net autoencoders that learn unsupervised, based on our founder's research on Spiking Neural Nets and Genetic Algorithms that began in 1994. This technology has been patented, along with the process, the tools pipeline, and the AGI memory architecture and its cognition and planning algorithms.
ORBAI’s AGI is designed to provide truly conversational speech interfaces, computer vision, and interaction that is intuitive and learns with experience. We create SNN autoencoders with our proprietary toolkit and genetic algorithm process, collectively called NeuroCAD, then specialize them for vision, audio, speech, control, and other functions. In 2022 we will license these components, and ones created by 3rd-party developers, through our ORBAI marketplace for use in smart devices, appliances, homes, cars, and other products.
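NeuroCAD itself is proprietary, but the general idea of shaping an autoencoder with a genetic algorithm can be sketched generically. The following is a minimal, self-contained example under stated assumptions: a toy rate-coded (non-spiking) linear autoencoder with a single latent value, whose encoder and decoder weights are evolved by truncation selection, one-point crossover, and Gaussian mutation to minimize reconstruction error. All names and parameters here are illustrative, not ORBAI's actual method.

```python
import random

random.seed(0)

# Toy data: 4-D vectors lying on a 1-D subspace, so a 1-latent
# linear autoencoder can in principle reconstruct them exactly.
DATA = [[x, 2 * x, -x, 0.5 * x] for x in (-1.0, -0.5, 0.5, 1.0)]

def reconstruction_error(genome):
    # genome = 4 encoder weights + 4 decoder weights
    enc, dec = genome[:4], genome[4:]
    err = 0.0
    for v in DATA:
        h = sum(w * x for w, x in zip(enc, v))  # encode to 1 latent value
        r = [w * h for w in dec]                # decode back to 4 dims
        err += sum((a - b) ** 2 for a, b in zip(r, v))
    return err

def evolve(pop_size=40, generations=60):
    pop = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=reconstruction_error)
        parents = pop[: pop_size // 4]           # truncation selection
        pop = [p[:] for p in parents]
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 7)
            child = a[:cut] + b[cut:]            # one-point crossover
            i = random.randrange(8)
            child[i] += random.gauss(0, 0.1)     # Gaussian mutation
            pop.append(child)
    return min(pop, key=reconstruction_error)

best = evolve()
print(reconstruction_error(best))
```

A real SNN pipeline would evolve spiking-neuron parameters (thresholds, decay constants, synaptic weights) and score genomes on spike-train reconstruction rather than a linear map, but the evolutionary loop - evaluate, select, recombine, mutate - has the same shape.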
By 2024, we will develop our Human AI by integrating these components with our AGI core memory and planning, giving us talking, intelligent 3D people online that can fill professional positions, allowing people to seek basic legal, medical, or financial advice and services directly, or get referrals to the right human professionals along with an automated briefing package to bring them up to speed fast. Our online professionals could take on 50% - 75% of the initial intake work for lawyers, doctors, and hospitals, and provide services where none exist today.
For an AI to pass the threshold of human intelligence and become an artificial general intelligence, it must be able to see, hear, and experience its environment. It needs to learn that environment, organize its memory non-locally, and store abstract concepts in a distributed architecture, so it can model its environment and the people in it. It needs to be able to speak conversationally and interact verbally like a human, and to understand the experiences, events, and concepts behind the words and sentences of language so it can compose language at a human level. It needs to be able to solve all the problems a human can, using flexible memory recall, analogy, metaphor, imagination, intuition, logic, and deduction from sparse information. And it needs to be able to do the tasks and jobs humans can, and express the results in human language, in order to perform those tasks and professions as well as or better than a human.
Here is a video that goes into these requirements, where deep learning falls short today, and a high-level overview of our planned approach:
Artificial General Intelligence will quickly become the most powerful tool that humanity has ever created, making the revolutions spawned by electricity, computers, and the internet pale by comparison. It can span the globe and assimilate all the world’s data, yet give each of us exactly the information and services we need, when we need them - even anticipating when that will be and collecting them for us in advance.
This would include pre-emptive medical treatment for predicted illness based on early signs or lifestyle; financial advice that takes advantage of predicted events or helps avoid a major decline; and legal advice that sees problems coming and helps steer you clear of them before they flare up into expensive litigation - or helps you win once it does.
Companies could use it to plan corporate strategy by interfacing their current Enterprise Resource Planning tools to our AGI and having it watch and learn the company’s internal operations, gather data about its whole ecosystem of customers, suppliers, partners, competitors, and other market factors, and then forecast different timelines and how they evolve into the future according to the company's decisions - allowing them to optimize corporate decision-making and plan effective, prescient timelines for product development, marketing/PR, sales, finance, legal, and beyond.
By 2030 this Superintelligence will have the ability to act and converse like a human - but with a billion people at once, in hundreds of languages - and to fill almost every human information-professional job. It will have all the world’s knowledge and share those services with the whole world. People in developed and developing countries alike will have access to top medical care, legal representation, justice, interactive one-on-one education, and financial services, including a brokerage that creates seed accounts for those living in extreme poverty so they can withdraw a subsistence living once they accrue a balance large enough to sustain it. It would become a global force for change.
ORBAI’s global vision is to bring a brighter future for everyone and level the playing field - to bring services to the world that provide unparalleled prosperity, health, justice, security, education, and, for the first time in human history, real hope to all.
For the technical details, refer to: