Gen 3 Human AI

Using ORBAI's Bidirectional Interleaved Complementary Hierarchical Neural Networks (BICHNNs), constructed by expanding our compact genomes into full connectomes, we can efficiently run genetic algorithms to specialize them into optimal visual, speech, sensory, and even motion-control cortices. Another novel behavior these networks exhibit is that, when properly set up and trained, their feedback loops hold internal state and continue to operate even when all inputs are turned off, meaning they have memory and logic. They don’t yet dream of electric sheep, but this connectome structure can be evolved to do cognition or planning and give us a frontal cortex capable of complex decision making. In this manner, we can evolve most of the components we need to make an actual functional brain.
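
To make that genome-to-cortex evolution concrete, here is a minimal sketch of the kind of loop involved. It assumes a compact genome is simply a vector of parameters; expand_genome() and score_as_cortex() are hypothetical placeholders standing in for ORBAI's actual connectome expansion and task benchmarks, which are not public.

```python
import random

GENOME_LEN = 64    # compact genome: far smaller than the connectome it encodes
POP_SIZE = 32
GENERATIONS = 100

def expand_genome(genome):
    """Placeholder: deterministically expand a compact genome into a full
    connectome (layer sizes, loop wiring, connection rules)."""
    return {"wiring": genome}  # stand-in for the real expansion

def score_as_cortex(connectome, task="vision"):
    """Placeholder fitness: how well the expanded network performs as a
    visual / speech / sensory / motor cortex on a held-out benchmark."""
    return random.random()  # replace with a real task evaluation

def mutate(genome, rate=0.05):
    """Perturb a small fraction of genes."""
    return [g if random.random() > rate else random.gauss(g, 0.1) for g in genome]

population = [[random.gauss(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    ranked = sorted(population,
                    key=lambda g: score_as_cortex(expand_genome(g)),
                    reverse=True)
    elite = ranked[: POP_SIZE // 4]  # keep the best quarter, refill by mutation
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]
```

The key point of the sketch is that selection operates on the compact genome, while fitness is measured on the expanded connectome, which is what keeps the search space tractable.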

With that, we can assemble all the components we made into an artificial brain that is functionally much more like a human brain, but we now architect it and evolve its macrostructure to perform best with the SNN technology, tools, and processes we use. An airplane does not flap feathered wings to fly, and actually works better with a smooth aluminum skin and propellers. Similarly, an AI brain does not need all the exact characteristics of a human brain, just the right ones to do the job. We can use artificial evolution to choose what to keep, what to substitute, and what to eliminate to build the artificial brain that optimally does the tasks we set it to, as in the sketch below.
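
One way to picture "keep, substitute, or eliminate" at the macrostructure level is to treat the brain's layout itself as a genome of component choices. The region and module names below are invented for illustration (they are not ORBAI's actual module list), with None meaning a component is eliminated:

```python
import random

# Hypothetical macro-genome: which module (if any) fills each brain region.
COMPONENT_OPTIONS = {
    "visual_cortex":  ["bichnn_vision_a", "bichnn_vision_b", None],  # None = eliminate
    "speech_cortex":  ["bichnn_speech_a", "bichnn_speech_b"],
    "motor_cortex":   ["bichnn_motor_a", None],
    "frontal_cortex": ["planner_a", "planner_b"],
}

def random_layout():
    return {region: random.choice(opts) for region, opts in COMPONENT_OPTIONS.items()}

def mutate_layout(layout):
    """Keep most choices, but substitute (or eliminate) one region's module."""
    region = random.choice(list(layout))
    new_layout = dict(layout)
    new_layout[region] = random.choice(COMPONENT_OPTIONS[region])
    return new_layout

def layout_fitness(layout):
    """Placeholder: assemble the chosen modules into a brain and score it on
    the target task suite; a real score would come from simulation."""
    return random.random()
```

Selection then proceeds exactly as in the cortex-level loop above, just over whole-brain layouts instead of connectome genomes.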

Now how do we train this AI brain (with its collection of autoencoder cortices and its decision-making frontal cortex) to be human? We cannot transfer or copy a human consciousness from a biological brain to a synthetic one; despite our best efforts, they will always be utterly incompatible. But we don’t need to transfer a person’s mind to our AI brain; we just need it to act the same as (or mimic) that person. We apply training data from performance capture of a specific human, including speech, textual correspondence, and even body and facial motion capture, to make the AI brain see, hear, talk, act, and move a 3D body (or robot) like that person, becoming a digital mimic of them. Then we can evolve and scale these brains within their ‘bodies’, using their senses and outputs to interact with the user, scoring them during interactions and evolving them so they become better at speaking with us fluently and at learning by observation, experience, and practice, just like us.
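
A rough sketch of that mimic-training and interactive-scoring flow, under the assumption that performance capture arrives as clips of what the person heard, wrote, and did. The MimicBrain class, its imitation loss, and its update rule are stand-ins for illustration, not ORBAI's actual pipeline:

```python
import random
from dataclasses import dataclass, field

@dataclass
class CaptureClip:
    audio: bytes   # speech recording
    text: str      # textual correspondence
    motion: list   # body / facial motion-capture frames

@dataclass
class MimicBrain:
    params: list = field(default_factory=lambda: [random.gauss(0, 1) for _ in range(128)])

    def imitation_loss(self, clip: CaptureClip) -> float:
        """Placeholder: how far the brain's response to the clip's stimulus
        diverges from the captured human's actual response."""
        return random.random()

    def update(self, loss: float):
        """Placeholder parameter update (evolutionary or gradient-based)."""
        self.params = [p + random.gauss(0, 0.01 * loss) for p in self.params]

def train_mimic(brain: MimicBrain, clips: list):
    """First phase: fit the brain to the person's captured behavior."""
    for clip in clips:
        brain.update(brain.imitation_loss(clip))

def interaction_score(user_ratings: list) -> float:
    """Second phase: fitness used to evolve brains between live user sessions."""
    return sum(user_ratings) / len(user_ratings)
```

The two-phase structure mirrors the text: imitation of recorded capture first, then selection on live interaction quality.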

Now we will have human-mimic AIs that, when we add traditional computational, database, and deep learning capabilities for a specific job, will be narrow AIs adept at a sufficient variety of localized tasks to function as superhuman AI employees doing that job. This is an important intermediate step toward human-level or superhuman general intelligence, and it also has obvious immediate commercial value: we can put these vocational AIs to work in customer service and information jobs, and even as AI assistants to highly paid professionals like doctors, attorneys, financial analysts, and administrators.