AGI

Developing an Artificial General Intelligence

What we usually think of as Artificial Intelligence today - the human-like robots and holograms of our fiction, talking and acting like real people with human-level or even superhuman intelligence and capabilities - is actually called Artificial General Intelligence (AGI), and it does NOT exist anywhere on Earth yet. What we do have is called Deep Learning, which has fundamental limitations that will not allow it to become AGI. Despite all their advances, CNNs, RNNs, reinforcement learning, and the other AI techniques in use today are just cogs and clockwork - sophisticated, but special-purpose and very limited.

For an AI to pass the threshold of human intelligence and become an artificial general intelligence, it must be able to see, hear, and experience its environment. It needs to learn that environment, to organize its memory non-locally, and to store abstract concepts in a distributed architecture so it can model its environment and the people in it. It needs to speak conversationally and interact verbally like a human, and to understand the experiences, events, and concepts behind the words and sentences of language so it can compose language at a human level. It needs to solve all the problems a human can, using flexible memory recall, analogy, metaphor, imagination, intuition, logic, and deduction from sparse information. And it needs to do the tasks and jobs humans do, and express the results in human language, so it can perform those tasks and professions as well as or better than a human.

Here is a video that goes into these requirements, shows where deep learning falls short today, and gives a high-level overview of our planned approach:

Summary - How is ORBAI building an AGI?

To build an AGI, we need better neural networks, with more powerful processing in both space and time and the ability to form complex circuits with them, including feedback. With these, we can build bidirectional neural network autoencoders that take sensory input data and encode it into compact engrams holding the unique input data, while the common data stays in the autoencoder itself. This lets us process all the sensory inputs – vision, speech, and many others – into consolidated, usable chunks of data called engrams, stored in short-term memory. For this we developed our BICHNN neural networks and NeuroCAD tools.
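
As a rough illustration of this encode/decode idea (not the BICHNN networks themselves), here is a minimal PyTorch sketch of a bidirectional autoencoder that compresses a flattened sensory frame into a compact "engram" vector and decodes it back; the layer sizes, names, and dense architecture are assumptions made purely for illustration.

```python
# Minimal sketch: a dense autoencoder standing in for a bidirectional
# sensory encoder. The forward path compresses input into an "engram";
# the backward path reconstructs (imagines) the input from that engram.
import torch
import torch.nn as nn

class EngramAutoencoder(nn.Module):
    def __init__(self, input_dim=4096, engram_dim=128):
        super().__init__()
        # Forward (encoding) direction: sensory input -> compact engram
        self.encode = nn.Sequential(
            nn.Linear(input_dim, 1024), nn.ReLU(),
            nn.Linear(1024, engram_dim))
        # Backward (feedback) direction: engram -> reconstructed input
        self.decode = nn.Sequential(
            nn.Linear(engram_dim, 1024), nn.ReLU(),
            nn.Linear(1024, input_dim))

    def forward(self, x):
        engram = self.encode(x)               # unique, compact representation
        reconstruction = self.decode(engram)  # "visualized" version of the input
        return engram, reconstruction

model = EngramAutoencoder()
frames = torch.randn(8, 4096)                 # a batch of flattened sensory frames
engrams, recon = model(frames)
loss = nn.functional.mse_loss(recon, frames)  # training leaves common structure in the weights
```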

These networks and the tools for evolving them are based on the work of neuroscientists in sensory perception, such as Miguel Nicolelis, who found that the senses carry bidirectional neural signals: they not only process information in the forward direction, encoding the signals from our sensory organs into compact engrams, but also feed signals back from our brains to our senses, so we can visualize (close your eyes and picture a fire truck). This feedback system serves several purposes: it compares the feedback signal to the sensory input to screen that input, giving us selective focus so we can search for specific objects in a scene or listen to one person in a crowded room; it lets us imagine and visualize scenes and plan; and at the core, it trains us to see and hear, building up the library of engrams that encode our perceptions of the world. In humans, these engrams are stored in the hippocampus as short-term memory, to be used in planning, visualization, and selective focus.
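
One way to picture the selective-focus role of that feedback path, as a simplified sketch: decode the engram being searched for back into sensory space, compare it patch by patch against the incoming input, and pass through only the regions that match. The cosine-similarity gating and threshold below are assumptions for illustration, not the mechanism described in the patent.

```python
# Simplified sketch of selective focus: a top-down feedback signal
# (decoded from a target engram) gates the bottom-up sensory input so
# that only matching regions are attended to.
import torch
import torch.nn.functional as F

def selective_focus(sensory_patches, feedback_patches, threshold=0.5):
    # Both tensors: (num_patches, patch_dim)
    similarity = F.cosine_similarity(sensory_patches, feedback_patches, dim=1)
    mask = (similarity > threshold).float().unsqueeze(1)
    return sensory_patches * mask             # keep only patches resembling the target

sensory = torch.randn(64, 256)                # e.g. 64 patches of an incoming image
feedback = torch.randn(64, 256)               # same patches "imagined" from the engram we seek
attended = selective_focus(sensory, feedback)
```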

The same system also encodes language (spoken and written) along with the sensory input, turning language into a skeleton embedded in the engrams of our Hierarchical Fragmented Memory (HFM, described below). Language is used to reference and mold the stored data, while the HFM in turn gives structure and meaning to the language.
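
To make the idea of language as a retrieval skeleton concrete, the toy sketch below pairs each stored engram with an embedding of the words that accompanied it, so a later phrase can pull back the best-matching engram. The hashed bag-of-words embedding and the LanguageIndexedMemory class are illustrative assumptions, not the structure the HFM actually uses.

```python
# Toy sketch: language stored alongside engrams acts as a key for
# retrieving them later. The text embedding here is a deliberately crude
# hashed bag-of-words, chosen only to keep the example self-contained.
import numpy as np

def embed_text(text, dim=64):
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class LanguageIndexedMemory:
    def __init__(self):
        self.keys, self.engrams = [], []

    def store(self, caption, engram):
        self.keys.append(embed_text(caption))  # language skeleton
        self.engrams.append(engram)            # sensory content

    def query(self, text):
        q = embed_text(text)
        scores = [float(q @ k) for k in self.keys]
        return self.engrams[int(np.argmax(scores))]

memory = LanguageIndexedMemory()
memory.store("red fire truck on the street", np.random.randn(128))
memory.store("dog running in the park", np.random.randn(128))
recalled = memory.query("picture a fire truck")   # returns the fire-truck engram
```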

This is based on the work of the prominent neuroscientist Eleanor Maguire, who argues that the purpose of memory in the brain is not to recall an accurate record of the past, but to reconstruct the past from the scenes and events we experienced and to predict the future, using the same stored information and the same process whether we are looking backward or planning what to do next. Therefore the underlying storage of human memories must be an abstracted representation, structured so that memories can be reconstructed from it for the purpose at hand, be it reconstructing the past, predicting the future, planning, or imagining stories and narratives – all hallmarks of human intelligence.

Now, to store this in long-term memory, we process a set of input engrams into a multi-layered, hierarchical, fragmented long-term memory. First we sort the engrams into clusters along the most important information axis, then autoencode those clusters further with bidirectional networks to create engrams that highlight the next most important information, and so on. At each layer, the bidirectional autoencoder acts like a sieve, straining out the data or features common to the cluster and leaving the unique identifying information in each engram, which can then be sorted on the next most important identifying information. Our AI essentially divides the world it perceives by distinguishing features, getting more specific at each level down, with the lowest-level engram containing the key for reconstructing the original engram from the features stored in the hierarchy. This leaves it with a differential, non-local, distributed Hierarchical Fragmented Memory (HFM) containing an abstracted model of the world, similar to how human memory is thought to work.
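
A hedged sketch of this layered "sieve", using k-means for the clustering step and PCA as a linear stand-in for the bidirectional autoencoder at each level. The real system would use evolved BICHNN networks; every library choice and parameter below is an assumption made for illustration.

```python
# Sketch of building a Hierarchical Fragmented Memory (HFM): at each level,
# engrams are clustered, the structure common to each cluster is strained
# into that cluster's model, and only the unique residual information is
# passed down to be sorted again at the next level.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def build_hfm(engrams, depth=2, clusters=4, keep_dims=8):
    levels = []
    current = engrams
    for _ in range(depth):
        labels = KMeans(n_clusters=clusters, n_init=10, random_state=0).fit_predict(current)
        residuals = np.zeros((len(current), keep_dims))
        models = {}
        for c in range(clusters):
            members = current[labels == c]
            pca = PCA(n_components=keep_dims).fit(members)   # the "sieve": common features stay here
            residuals[labels == c] = pca.transform(members)  # unique identifying info passes down
            models[c] = pca
        levels.append({"labels": labels, "models": models})
        current = residuals
    return levels, current        # per-level models plus the lowest-level keys

engrams = np.random.randn(400, 128)   # stand-in for a batch of short-term-memory engrams
levels, keys = build_hfm(engrams)
```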

When our AI wants to reconstruct a memory (or create a prediction), it works from the bottom up, using language or other keys to select the elements it wants to propagate upwards, re-creating scenes, events, and people, or creating imagined events and people from the fragments by controlling how it traverses upwards. This foundation is what the rest of our design is built on: once we can re-create past events and imagine new ones, we have the ability to predict the future and plan possible scenarios - that is, to do cognition and problem solving.
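
Continuing the sketch above, reconstruction walks back up the hierarchy, inverse-transforming each level's residual through the cluster model that produced it to approximately recover the original engram; substituting a different cluster path at some level is one assumed way the same machinery could produce an imagined variant rather than a recollection.

```python
# Companion to the HFM sketch above (reuses its `levels` and `keys`):
# rebuild one engram from its lowest-level key by climbing the hierarchy.
def reconstruct(levels, keys, index):
    vec = keys[index]
    for level in reversed(levels):                 # bottom-up traversal
        cluster = level["labels"][index]           # which model "strained" this engram
        vec = level["models"][cluster].inverse_transform(vec.reshape(1, -1))[0]
    return vec                                     # approximation of the original engram

recalled_engram = reconstruct(levels, keys, index=0)
```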

For the technical details, refer to:

ORBAI AGI Provisional Patent

and

ORBAI AGI Eta Tech Video

An Artificial General Intelligence will quickly become the most powerful tool that humanity has ever had, making the revolutions spawned by electricity, computers, and the internet pale by comparison. Where we are now overwhelmed by a world's worth of information we could never humanly assimilate, we will have something that can go through it all for us and give us exactly what we need, when we need it.

Companies could use it to plan their corporate strategy by having it watch and learn their internal operations, gather data about their whole ecosystem of customers, suppliers, partners, competitors, and other market factors, then forecast different timelines and how they evolve according to the decisions made, allowing them to optimize their corporate decision-making and plan effective, prescient timelines for product development, marketing/PR, sales, finance, legal, … into the future.

Ordinary people could use it to plan their lives, evaluating different decisions they could make - from career to marriage to finances - and seeing the trajectory of events that would result and the probable outcomes. They could access AGI professional services like Finance, Legal, Medical, Counselor, Fitness Advisor, and hundreds of others, and these services would be available globally.

What we get at the apex of this process is a Strong AGI or Superintelligence - a powerful tool beyond anything civilization has ever seen. It would have the ability to act and converse like a human - but with a billion people at once - and to fill almost every human professional job in those interactions, while having a mind far more vast than ours: able to take in all the information from its interactions and data feeds, analyze it with superhuman cognition, see patterns in data spanning the globe and decades in time, make decisions based on it, and plot selected events into the future, making it an Oracle from which corporations, nations, or individuals can seek knowledge and forecasts. To us, it would become everything.

Brent Oster

CEO, ORBAI

 

The processes and technologies for our BICHNN neural networks and NeuroCAD tools are covered by: Utility Patent US #16/437,838 + PCT

Also filed 11 June 2019 in the US and 14 December 2020 in China.