
AGI

Since computers were invented, their most significant limitation has been that they cannot interpret the real world around them. They have relied on humans to format the world into data they can understand and operate on, and on humans to interpret the results. Even modern deep learning is subject to this limitation.

Many AI researchers agree that today’s narrow deep learning and machine learning are too simplistic to ever match the human brain and our broad intellectual capacity, and that it will instead take a more powerful and flexible artificial general intelligence (AGI) to enable human-level AI and to go beyond it.

ORBAI is developing and patenting more general AI methods that can dynamically decompose any reality the AI perceives into fundamental building blocks, understand and manipulate that reality in the native mathematical language of computers (linear algebra), and then reconstruct the results from those building blocks back into reality.

It does this by taking in any real-world inputs, using ORBAI’s proprietary SNN Autoencoders to perform a completely unsupervised process of (see the code sketch after this list):

  1. Encoding any combination of 1D, 2D, or 3D inputs (including vision, audio, speech, and other data) and reducing them to an encoded engram stream.
  2. Applying a hierarchical method that alternates proprietary PCA and these SNN Autoencoders to differentiate and subdivide the engrams into smaller, more fundamental components at each step.
  3. Repeating until each engram is decomposed into unique, discrete building blocks of reality (a basis set and coordinates), creating a library of all the building blocks.
  4. These engram basis sets and basis coordinates can be recombined to recreate the memory of the sensory data, or combined in novel ways to synthesize new memories.
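
As a rough illustration of steps 1-4, here is a minimal sketch of the hierarchical decomposition, with plain PCA standing in for both the proprietary PCA variant and the SNN Autoencoders (which are not public); the two-level hierarchy, shapes, and names are all illustrative assumptions, not ORBAI's implementation:

```python
# Minimal sketch only: plain PCA stands in for ORBAI's proprietary PCA and
# SNN Autoencoders; the two-level hierarchy and all shapes are assumptions.
import numpy as np

def pca_stage(X, k):
    """Fit a rank-k linear basis to the rows of X via SVD.
    Returns (mean, basis, coords) with X ~= coords @ basis + mean."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    basis = Vt[:k]                       # k building-block directions
    coords = (X - mu) @ basis.T          # basis coordinates per sample
    return mu, basis, coords

# Step 1: encode flattened 2D sensory frames into an engram stream.
frames = np.random.rand(500, 64 * 64)            # stand-in sensory input
mu1, basis1, engrams = pca_stage(frames, k=32)   # 32-D engram stream

# Steps 2-3: subdivide engrams into smaller, more fundamental components,
# yielding a basis set (library of building blocks) plus coordinates.
mu2, basis2, coords = pca_stage(engrams, k=8)

# Step 4: recombine basis sets and coordinates to recreate the memory.
recon = ((coords @ basis2) + mu2) @ basis1 + mu1
print("reconstruction error:", np.linalg.norm(recon - frames))
```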

Now that this information is in a native mathematical computer format (the basis coordinates are just an array of numbers), the AI can apply linear algebra, supercomputing, and database operations to it. It can compute narratives and trajectories in time that model interactions with real-world objects, environments, and systems. Because it can also form new memories, it can predict, guess, plan, and even dream, solving complex problems the way we do, from sparse information. The new information, still in mathematical form, can then be reconstructed into engrams and back into reality by running the encoding system in reverse.
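
Since the coordinates are plain arrays, standard linear algebra applies directly. A hedged sketch of one such operation, interpolating between the coordinates of two memories to synthesize a third; the basis and both memories here are random placeholders:

```python
import numpy as np

# Placeholder building blocks: 8 basis vectors spanning a 32-D engram space.
rng = np.random.default_rng(0)
basis = rng.standard_normal((8, 32))

coords_a = rng.standard_normal(8)    # basis coordinates of memory A
coords_b = rng.standard_normal(8)    # basis coordinates of memory B

# Combine coordinates in a novel way: a point halfway between A and B.
coords_new = 0.5 * (coords_a + coords_b)

# One matrix product reconstructs the synthesized engram, ready to be
# decoded back to sensory form by running the encoder in reverse.
engram_new = coords_new @ basis
print(engram_new.shape)              # (32,)
```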

Being able to predict what will happen next is a fundamental property of human intelligence. We use it not just for problem solving, but also to perceptually screen what our senses are seeing and hearing against what we expect them to perceive, enabling us to hear one person talking in a crowded room or to find a specific object or person in a crowd. It is also fundamental to our conversational abilities: we are always predicting ahead while conversing; otherwise the lag in answering would be seconds.

Once we have narratives of basis coefficients recorded into memory, we use them as input to train a predictor, evolving a specialized SNN for looking into the future. We visualize it as a NeuroSquid, its tentacles sampling multiple points on multiple narratives while the SNN generates predicted points in the future.
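
The real predictor is an evolved SNN; as a stand-in, here is a minimal autoregressive sketch that, like the NeuroSquid's tentacles, samples a window of recent points on a narrative and learns to emit the next point. Least-squares replaces the SNN, and all shapes and constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
T, D, W = 200, 8, 4     # timesteps, coordinate dimensions, sampling window

# Toy narrative: basis coordinates evolving smoothly in time.
narrative = np.cumsum(0.1 * rng.standard_normal((T, D)), axis=0)

# Training pairs: W consecutive sampled points (flattened) -> next point.
X = np.stack([narrative[t:t + W].ravel() for t in range(T - W)])
y = narrative[W:]

# Least-squares linear map stands in for the evolved predictor SNN.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Roll the predictor forward to generate points in an imagined future.
window = narrative[-W:].copy()
for _ in range(10):
    nxt = window.ravel() @ weights   # predicted next coordinate point
    window = np.vstack([window[1:], nxt])
print("predicted point 10 steps ahead:", nxt)
```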

The core problem solving begins by recording narratives of these basis coefficients into memory as they evolve in time. The predictor is then started on a narrative and predicts into a possible future that can depart from that narrative, later join another, and keep weaving a web of imaginary narratives between the branches of the narratives of reality: essentially dreaming, but with a predictor trained on the patterns of reality. Dreams that neither conform to reality nor contribute to problem solving are attenuated over time, while those that lead to correct solutions are reinforced.
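
A schematic of that reinforce/attenuate loop, with a trivial drift model standing in for the trained predictor; the scoring rule and every constant here are assumptions for illustration, not ORBAI's design:

```python
import numpy as np

rng = np.random.default_rng(2)

def predictor(point):
    """Stand-in for the trained predictor SNN: one noisy step forward."""
    return point + 0.1 + 0.05 * rng.standard_normal()

goal = 2.0
dreams = []                            # (weight, imagined narrative) pairs

for _ in range(50):                    # weave many imaginary narratives
    point, path = 0.0, [0.0]
    for _ in range(30):                # roll one dream forward in time
        point = predictor(point)
        path.append(point)
    # Reinforce dreams that end near the goal; attenuate those that don't.
    weight = np.exp(-abs(path[-1] - goal))
    dreams.append((weight, path))

best_weight, best_path = max(dreams, key=lambda d: d[0])
print(f"strongest dream ends at {best_path[-1]:.2f} (weight {best_weight:.2f})")
```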

Once this web of narratives is laid down in memory by experience and by dreams, a problem can be posed as a starting point on a narrative, given in coordinates, with the goal as an ending point in coordinates. Any number of algorithms can traverse the web to find the best path from start to goal; my favorite is lightning leaders, which branch out from the start and the goal, splitting and spreading until two opposite leaders connect, like a bolt of lightning going off.
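
In graph terms, lightning leaders amount to a bidirectional search: frontiers grow from both the start and the goal until two opposite leaders touch. A minimal sketch over a small, assumed narrative web stored as an undirected adjacency dict; all names are hypothetical:

```python
from collections import deque

def lightning_search(web, start, goal):
    """Bidirectional BFS: grow 'leaders' out from both start and goal
    until two opposite fronts connect, then splice the full path."""
    if start == goal:
        return [start]
    parents = {start: None}            # leader growing from the start
    children = {goal: None}            # leader growing from the goal
    front_s, front_g = deque([start]), deque([goal])
    while front_s and front_g:
        for front, seen, other in ((front_s, parents, children),
                                   (front_g, children, parents)):
            node = front.popleft()
            for nxt in web.get(node, []):
                if nxt in seen:
                    continue
                seen[nxt] = node
                if nxt in other:       # opposite leaders have connected
                    return _splice(parents, children, nxt)
                front.append(nxt)
    return None                        # web has no path between them

def _splice(parents, children, meet):
    path, node = [], meet
    while node is not None:            # walk back to the start
        path.append(node)
        node = parents[node]
    path.reverse()
    node = children[meet]
    while node is not None:            # walk forward to the goal
        path.append(node)
        node = children[node]
    return path

web = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
       "D": ["B", "C", "E"], "E": ["D"]}
print(lightning_search(web, "A", "E"))   # -> ['A', 'B', 'D', 'E']
```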

With these, ORBAI will take the first steps toward an AGI that can perceive the real world, reduce those perceptions to an internal format that computers can understand, yet still plan, think, and dream like a human, and then convert the results back into human-understandable form. It can even converse fluently in human language, with letters, numbers, and phonemes as part of the basis set and segments of narratives describing how words, sentences, and conversations are composed. Even if this AI cannot reach human intelligence yet, it is integrated with a supercomputer and a massive database system, giving it a decided edge over humans until its general intelligence catches up.

The core SNN Autoencoders are already working in the ORBAI NeuroCAD tools, and the rest of the design being patented has been refined enough that a sub-human AGI prototype could be constructed much sooner than most would estimate: within 2-3 years. All of the input, memory, prediction, cognition, and planning systems scale with compute power, so this could advance rapidly to human-level AGI and beyond 3-5 years later.

ORBAI’s 90-second video gives a more visual overview: https://youtu.be/YEw0SJg--PE

The business model is to license the development tools and a developer toolkit to our customers and the third-party developers who work with them, then provide access to the AGI as SaaS, enabling our developer network to connect it with data and applications for various customer needs. This would revolutionize planning in finance, medicine, law, administration, agriculture, enterprise, industrial controls, traffic monitoring and control, network management, and almost any other field of human endeavor where we need to predict future trends to make decisions in the present.

For a near-term application in medicine, it could model the progression and treatment of specific diseases, giving doctors a tool to plan treatment along a timeline and even to preempt many conditions, treating them before they become acute.

In financial applications, it could track the massive number of factors that feed into the performance of specific stocks, including modeling the behavior of opposing traders and bots, allowing stockbrokers to make much better predictions of market movements.

Progress:

Utility Patent US #16/437,838 + PCT, filed June 11, 2019

Provisional Patent US #63/138,058, filed January 2021

Shows and Interviews: NVIDIA GTC, TechCrunch, Singularity University

Featured in Ai Authority, Forbes, Dojo Live, Yahoo Finance, Digital Journal