Developing an Artificial General Intelligence
What we usually think of as Artificial Intelligence today, when we see human-like robots and holograms in our fiction, talking and acting like real people with human-level or even superhuman intelligence and capabilities, is actually called Artificial General Intelligence (AGI), and it does NOT exist anywhere on earth yet. What we do have is called Deep Learning, which has fundamental limitations that will not allow it to become AGI. Despite all their advances, CNNs, RNNs, Reinforcement Learning, and the other AI techniques in use today are just cogs and clockwork: sophisticated, but special-purpose and very limited.
For an AI to pass the threshold of human intelligence and become an artificial general intelligence, it must be able to see, hear, and experience its environment. It needs to learn that environment, to organize its memory non-locally, and to store abstract concepts in a distributed architecture so it can model its environment and the people in it. It needs to be able to speak conversationally and interact verbally like a human, and to understand the experiences, events, and concepts behind the words and sentences of language so it can compose language at a human level. It needs to be able to solve all the problems a human can, using flexible memory recall, analogy, metaphor, imagination, intuition, logic, and deduction from sparse information. And it needs to be able to do the tasks and jobs humans do, and express the results in human language, in order to perform those tasks and professions as well as or better than a human.
Here is a video that goes into these requirements, shows where deep learning falls short today, and gives a high-level overview of our planned approach:
For the technical details, refer to:
An Artificial General Intelligence will quickly become the most powerful tool humanity has ever had, making the revolutions spawned by electricity, computers, and the internet pale by comparison. Where we are now overwhelmed by a world's worth of information we could never humanly assimilate, we will have something that can go through it all for us and give us exactly what we need, when we need it, and that can look into the future and show decision makers the timelines resulting from key decisions.
Companies could use it to plan their corporate strategy by having it watch and learn a company's internal operations, gather data about its whole ecosystem of customers, suppliers, partners, competitors, and other market factors, then forecast different timelines and how they evolve into the future according to the company's decisions, allowing them to optimize their corporate decision-making and plan effective, prescient timelines for product development, marketing/PR, sales, finance, legal, and so on, into the future.
Ordinary people could use it to plan their lives, evaluating different decisions they might make, from career, to marriage, to finances, and seeing the trajectory of events and the probable outcomes that would result. They could access AGI professional services in finance, law, medicine, counseling, and hundreds of other fields, and these services would be available globally.
What we get at the apex of this process is a Strong AGI, or Superintelligence: a tool more powerful than anything civilization has ever seen. It would be able to act and converse like a human, but with a billion people at once, and to fill almost every human professional job in those interactions. Yet it would have a mind far more vast than ours, able to take in all the information from its interactions and data feeds and analyze it with superhuman cognition, to see patterns in data spanning the globe and decades in time, to make decisions based on them, and to project selected events into the future, making it an Oracle from which corporations, nations, or individuals could seek knowledge and forecasts. To us, it would become everything.
The processes and technologies for our BICHNN neural networks and NeuroCAD tools are covered by Utility Patent US # 16/437,838 plus a PCT application, filed 11 June 2019 in the US and 14 December 2020 in China.