The latest hype coming from science journalism, and to some degree from some corners of the scientific community, is the threat posed by artificial intelligence, popularly known as AI. This, of course, is nothing new. Since at least the late nineteenth century, humanity has expressed concerns that man-made machines would threaten its existence. Often the fear was that machines would take away jobs from people or eliminate the need for people altogether. I believe that the threat is not from man-made machines acquiring some sort of super-intelligence; rather, it is from the fear of our own intellects manifested in those machines.
The early machines were mostly mechanical in nature, as are many robotic systems today, with the difference that today’s robots are run by sophisticated computer software. In the twenty-first century, the threat perceived by some, including Stephen Hawking and Max Tegmark, comes from the creation of “super-intelligent” machines or computers. It is important at this point to make a clear distinction among the several forms of machine-based threats, especially those we may characterize as having super-intelligence.
First, there are purely mechanical machines. Everything from a simple lever to bulldozers, robots, and automobiles may be included in this category, while recognizing that some mechanical machines are controlled by computer programs. It is the computer programming that leads many to believe that machines may develop a “mind” of their own and thus pose a threat to humanity. You have seen this theme played out over and over in many sci-fi movies, from The Terminator to The Matrix. Machines that are controlled physically by human beings, without sophisticated computer programs, lack the capacity to form a “mind” of their own and fall into a subcategory that does not pose the same kind of threat.
Next, there are the machines we call computers, which store and process all kinds of information. In addition, there are computers that control other machines, as noted above, and computers that control other computers. Computers were first designed to perform tasks that often took human beings a very long time. Computations that would take a human being days, weeks, months, or years were reduced to seconds, milliseconds, or nanoseconds.
As these computers acquired more and more computational power, and thus the ability to process and store more and more information from their human programmers, including the ability to “learn” and self-monitor, people began to ask what would happen if computers used the information programmed into them to create their own programs, independent of any human programmer. An example often cited by those who worry about such things is a computer designed to collect paper clips. At some point, it is imagined, this computer develops a sense of purpose: to collect all the paper clips possible, regardless of the consequences. It commandeers all the world’s energy and resources to perform its task, ultimately to the demise of all of humanity. In some ways this example is laughable, but it is used to make the point that a computer system need not have malevolent intent to negatively impact humanity. However, I think science fiction movies offer better examples, e.g., I, Robot and The Matrix, along with a host of others, including the most recent, Transcendence.
There is yet another category, largely ignored by those concerned about super-intelligent machines taking over the world. This category is a result of “emergence,” which I have explored in this blog in the past. Simply stated, emergence is a new and unexpected result, such as complexity, stemming from constituent parts that are nothing like the resulting whole. There is a hierarchy in science that describes emergence: atoms to molecules, to chemistry, to biochemistry, to life, to intelligence. A great example is water. Two hydrogen atoms and one oxygen atom combine to form a molecule of water, with properties nothing like those of the constituent atoms. The whole is indeed greater than the sum of its parts. A more potent example is human consciousness, which emerges from the biochemical and electrochemical properties of the brain.
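A toy way to see emergence for yourself is Conway’s Game of Life, a standard illustration (not one used in the original discussion above): the rules say only how a cell lives or dies based on its neighbors, yet patterns arise, such as the “glider,” that travel across the grid even though nothing in the rules mentions motion. A minimal sketch in Python:

```python
from collections import Counter

def step(live):
    """One Game of Life generation. `live` is a set of (x, y) cells.
    A cell is alive next generation if it has exactly 3 live neighbors,
    or if it is alive now and has exactly 2."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "glider": five cells that, after 4 generations, reappear one cell
# down and to the right -- motion "emerges" from motionless rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in `step` refers to a glider or to movement; the traveling pattern is a property of the whole that is nothing like the parts, which is the point of the hierarchy described above.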
In like manner, a very sophisticated computer could develop an emergent property totally different from what it was programmed to do. Interestingly, science fiction explored this possibility in a Star Trek: The Next Generation episode appropriately titled “Emergence.” In this episode, the computer that controlled all of the systems aboard the starship Enterprise developed an entirely new, emergent life form. It was not another computer. It did not pose a threat to the ship or have a “super-intelligence”; it was simply “different.” Once the life form had fully emerged, it left the ship and disappeared into the vast expanse of the universe. It is this “different” quality that really describes emergence.
In a landmark 1972 paper, “More Is Different,” P. W. Anderson described the process of emergence. Anderson proposes that emergence may be understood in terms of the breaking of symmetry, which he describes as happening in solid-state physics when a metal becomes a superconductor at extremely low temperatures. This is a complicated business, and I won’t describe it further here, except to say that symmetry breaking often leads to the creation of complexity at a level that is awe-inspiring if not downright miraculous. Click on the link above for a more detailed explanation of symmetry.
There are those, however, who do not believe that the immediate threat of AI will result from emergence. To quote the prominent UC Berkeley computer scientist Stuart Russell: “The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer.”
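Russell’s notion of “quality” can be made concrete with a small sketch. The example below is mine, not Russell’s; the actions, probabilities, and numbers are all invented for illustration. An agent scores each action by its expected utility and picks the highest scorer. With a naive utility function like “count the paper clips,” the agent happily prefers a destructive action, because harm simply does not appear in the function its designer specified:

```python
# Toy expected-utility agent; every name and number here is hypothetical.

def expected_utility(dist, utility):
    """`dist` is a list of (probability, outcome) pairs for one action."""
    return sum(p * utility(outcome) for p, outcome in dist)

def best_action(actions, utility):
    """Pick the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# Hypothetical choices facing a paper-clip-collecting agent.
actions = {
    "buy_clips": [(1.0, {"clips": 100, "harm": 0})],
    "strip_mine_the_city": [
        (0.5, {"clips": 10_000, "harm": 1}),  # might yield a huge haul
        (0.5, {"clips": 0, "harm": 1}),       # might yield nothing
    ],
}

# Naive designer utility: clips are all that matter.
naive = lambda o: o["clips"]
print(best_action(actions, naive))    # strip_mine_the_city (5,000 expected clips)

# A utility that also penalizes harm flips the decision.
careful = lambda o: o["clips"] - 1_000_000 * o["harm"]
print(best_action(actions, careful))  # buy_clips
```

The machine in the first case is making a perfectly “high-quality” decision in Russell’s sense; the danger lies entirely in what the human-specified utility function leaves out.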
There are those, myself included, who believe that the concern over a possibly malevolent “super-intelligent” computer or machine is born of the fear that any “intelligence” will assume the same qualities as its creators and hence seek to conquer and destroy, whether intentionally or not. It is the same fear that allows us to cast alien visitors as evil conquerors bent on the destruction of humanity. Even Stephen Hawking has succumbed to this mode of thinking.
There was a time when such thinking was relegated to science fiction and pseudo-science. It has not only gained in popularity; serious and well-respected scientists and institutions are now sounding the alarm that the threat from a super-intelligent machine is real. Four groups have gained prominence in the field of AI risk analysis: the Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.
I firmly believe that any “intelligence” that may someday result from sophisticated man-made machines, with CPUs modeled after human neuro-processors, will have an “emergent” quality or property that we cannot even imagine. I think the Star Trek episode mentioned earlier will prove to be particularly prescient.
I hope that we do not head down a path where the next great risk to humanity takes on the same global urgency as climate change, with billions of dollars and political capital spent on a mission analogous to fearing our own shadows; in this case, the shadow is our obsession with human intelligence.
Human intelligence is but one manifestation of the complexity of the universe. The universe is incomprehensibly vast, wondrous, and full of great mystery and things we have yet to discover. Instead of leading us to fear some imagined future super-intelligence, our intellect should lead us to humility. And for that,
To God Be The Glory Forever and ever