This week, two events perfectly demonstrated humanity’s almost schizophrenic relationship with artificial intelligence. First, Terminator: Genisys, with the threat of Skynet looming over our heads, premiered on the big screen. Second, a consortium of tech companies and AI researchers revealed 37 research grants aimed at keeping AI from going out of control. But how close are we really to the dark future painted by Terminator or any other AI-centric science fiction? Turns out, we’re probably nowhere near it. But the real danger might not actually come from AI. It might come from mankind after all.
There is no question that artificial intelligence has progressed by leaps and bounds. We have seen its principles applied to various products ranging from the mundane, like search engines, to the sensational, like Siri, to the absurd, like an AI that can make Mario levels just by watching YouTube videos. But while these examples demonstrate almost frightening intelligence, they are still a far cry from the genocidal AI of fiction. Those do not merely respond to user input, though they are often portrayed as having started that way. They act on their own volition and on their own terms, attributes of a property we call self-awareness, or consciousness.
Intelligence is not consciousness, or at least the two are distinct when it comes to computers. Artificial Consciousness is an even more theoretical field than Artificial Intelligence. While there is very little doubt now that AI is real, there is still ongoing debate on whether AC is possible at all. The ability to act with autonomy, without relying on input from humans or anything else, is too complex to be replicated in software, some argue. And consciousness is essential to the Skynet of Terminator’s future. Only a machine that is aware of its own existence, and able to derive autonomy from that awareness, can become the threat to humanity that science fiction often portrays. Without it, those machines are nothing but highly intelligent and efficient tools in a villain’s hands. Artificial Consciousness touches on almost philosophical topics of free will and ethics, the latter of which we’ll see come up again later, and, fortunately or unfortunately, we’re still not there yet.
Another seemingly forgotten aspect of the AI antagonists of fiction is their will to persist, a self-preservation instinct that also ties in with self-awareness. AI is, ultimately, just software, and any software, even the most malicious kind, can ultimately be erased. In fiction, flipping the off switch is usually the trigger that sends the AI into a killing mood. It begins to think that it is better than man, or that it can better mankind by annihilating it. Perhaps these conclusions can be arrived at using pure logic, but the ability to dwell on them and act on them again goes back to having consciousness.
But even with those motives, without the desire to survive, any AI could simply be destroyed. It would not protect itself. It would not be able to judge that its existence is a necessity that trumps humanity’s. Of course, something akin to that can be programmed as easily as saying “a = 5”, but that would boil down to one thing: the programmer’s ethical use of artificial intelligence.
The real threat: Man
Again, science fiction usually narrates how man created the AI monster that would almost wipe him out of existence, but it isn’t the act of creating AI itself that would send man’s history on a downward spiral. It is the use of such artificial intelligence that is the more imminent danger. Forget Skynet or the Matrix. AI in the hands of man is already enough to wipe out mankind, if it comes down to it. In short, it could be the irresponsible, not to mention unethical, use of artificial intelligence that eventually leads to a future as dark as, or even darker than, Terminator’s.
That is, to some extent, the mission of the aforementioned consortium, the Future of Life Institute. This isn’t a pro-human, anti-AI group, though of course it will always be working towards the betterment of humanity. Members of the Institute include companies and people who work on Artificial Intelligence, like Google, Microsoft, and, surprise surprise, Elon Musk. They don’t discourage the development of AI. Instead, they encourage its responsible development and ethical use. AI reaches beyond our browsers or smartphones. There are numerous projects that put AI inside robots as well as weapons. The Future of Life Institute seeks to steer the AI field onto a righteous path, so to speak, developed for the good of society. It wants to avoid the ethical pitfalls of a dark AI future and nip the possibility of Skynet right in the bud.
It is perhaps too soon to start worrying about Skynet happening, even with Musk’s swarm of Internet-bearing satellites in our future. The days of a self-conscious, autonomous AI with an instinct for self-preservation are still a distant future. But a future where AI is used either with little regard for safety or with outright malice might not be that far away. The killer AI of the future would be created by man; that much is true. The only way to prevent that from happening is by teaching man to use technology responsibly. Judging by today’s standards, however, that might be a mission as complicated as, or perhaps even more complicated than, developing AI itself.