Deep Learning, Artificial General Intelligence and Whole Brain Emulation
The field of Artificial Intelligence (AI) has been the subject of intense research for more than six decades but has, so far, failed to reach its main objective, that of creating machines that are as intelligent, as flexible and as creative as human beings. Despite the many optimistic predictions that have been made in the course of these decades, the goal of designing machines (or programs) that exhibit artificial general intelligence (AGI) remains as elusive today as it was in 1958, when the New York Times published a story that began, in an overly optimistic way, with the sentence “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”
In part, this is due to what is known as Moravec’s paradox: high-level symbolic reasoning, which seems hard to us, is relatively easy to program and requires very few computational resources, but low-level perceptual processing, which seems easy, is hard to program and requires extensive computation. For instance, it is easy to write a program that proves mathematical theorems but hard to write a program that identifies the objects in an image.
The difficulties underlying this paradox made it very hard to create systems that deal with the uncertainties of the real world and imposed serious limits on the types of systems that AI researchers have been able to build. As the decades went by, researchers and developers realized that programming the detailed behaviors required to handle real-world data is a near-impossible task, leading to several AI winters: periods when the field seemed to stagnate and the objective of developing AGI seemed unattainable.
Deep learning models can become proficient at many different tasks but can’t learn like humans
One area of AI that has gained increased relevance is machine learning, the field concerned with making machines learn from experience, just as humans do. Neural networks, a machine learning approach based on perceptrons, simple units inspired by biological neurons and configured to perform specific tasks, grew in popularity as time went by (they were already the topic of the aforementioned NY Times piece) but, in the 20th century, remained unable to address complex perceptual problems.
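For readers curious about what such a unit looks like in practice, the following is a minimal sketch of a single perceptron trained with the classic threshold-and-update rule. The AND task, the learning rate and the number of epochs are illustrative choices made for this example, not details of any system mentioned in the text.

# Minimal perceptron sketch: weighted inputs, a threshold, and a simple
# error-driven update rule (illustrative assumptions, not a specific system).
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0   # threshold activation
            error = target - pred
            w += lr * error * xi                # perceptron update rule
            b += lr * error
    return w, b

# Learn the logical AND function, a simple linearly separable toy problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if x @ w + b > 0 else 0 for x in X])   # expected: [0, 0, 0, 1]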
Over the last decade, however, with the development of more powerful computers, extensive datasets, and new methods collectively known as deep learning, neural networks finally became applicable to real-world problems. Deep neural networks, trained using mathematical algorithms that maximize their performance on a given task, have demonstrated exceptional performance on a number of tasks that, until recently, could not be handled by AI systems. These tasks include face and object recognition in images and videos, automatic surveillance, superhuman-level play of board games, automatic machine translation, autonomous driving of vehicles, synthesis of realistic images and videos, analysis of legal contracts, image-based checkout and billing in stores, voice-activated execution of orders by digital assistants, and medical image analysis, among many others.
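As a rough illustration of what “trained using mathematical algorithms that maximize their performance” means in practice, the sketch below fits a small network by gradient descent on a synthetic classification task using the PyTorch library. The architecture, data and hyperparameters are assumptions made purely for the example and do not come from any of the systems mentioned above.

# Illustrative training loop: adjust a small network's weights by gradient
# descent so that a loss (a measure of error) decreases on a toy task.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 10)                        # synthetic inputs
y = (X.sum(dim=1, keepdim=True) > 0).float()    # synthetic binary labels

model = nn.Sequential(                          # a small "deep" network
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)
loss_fn = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)                 # how wrong the network is
    loss.backward()                             # gradients via backpropagation
    optimizer.step()                            # nudge weights to reduce loss

accuracy = ((model(X) > 0.5).float() == y).float().mean().item()
print(f"training accuracy: {accuracy:.2f}")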
Despite all these successes, we still lack a clear roadmap to AGI. Deep learning models can become proficient at many different tasks, but they learn in ways that are very different from the way humans learn. Unlike humans, deep learning systems do not generalize well from small amounts of data and usually fail to transfer knowledge acquired in one domain to another, unrelated domain.
Techniques like reinforcement learning (RL), long short-term memory (LSTM) networks and generative adversarial networks (GANs) significantly extend the range of applicability of deep neural networks, but it is unlikely that these and other existing techniques will be sufficient to bridge the gap that remains between today’s systems and systems that exhibit artificial general intelligence. The fact is that no one knows whether the continued development of AI and machine learning techniques will eventually be enough to achieve AGI or whether, on the contrary, AGI will forever remain an unreachable and unreasonable goal.
Non-believers argue that the enthusiasm that characterizes the field today will be followed by another AI winter, as deep learning networks fail to fulfill the high expectations they have raised. Believers, on the other hand, argue that AGI is not only possible, but within close reach, requiring only incremental improvements in existing technologies. Of course, the truth may lie somewhere in between. One thing, however, is indisputable. As our knowledge of AI and neuroscience advances, we will improve our understanding of the way intelligence results from the workings of the human brain and we will be able to better emulate similar processes in a computer.
We know that AGI is possible, at least in principle, because evolution has created it once, in humans. If everything else fails, we may be able to simulate the detailed behavior of a brain in a computer, an approach that some believe is the easiest way to reach AGI. If this technology of whole brain emulation (WBE) becomes possible, it will also lead to a kind of virtual immortality, a prospect that is unsettling, to say the least. It may happen that, for the first time in human history, immortality will not be a prerogative of the gods, but a possibility offered by technology. Whatever the future of AI, we can be sure that the coming decades will bring interesting new challenges.