A brave new world or a Matrix-inspired nightmare?
AI could be the ‘biggest event in the history of our civilization’ but the risks could far outweigh the rewards
In the Wachowski brothers’ stylish sci-fi classic The Matrix, a dystopian future is depicted where machines enslave their human creators and use them as a simple source of energy. Taut, terrifying and thought-provoking, this is the dark side of artificial intelligence explored through the prism of Hollywood.
Yet it is the sort of Odysseus-style journey to the depths of despair that resonates in today’s brave new world of AI.
“Success in creating effective artificial intelligence could be the biggest event in the history of our civilization. Or the worst,” said Stephen Hawking, the groundbreaking British theoretical physicist, cosmologist and author of A Brief History of Time.
“We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”
For one of the greatest scientists of his generation, the warning is stark. But then, the subject is so controversial that even the New Age billionaires who amassed their fortunes in high-tech industries are split down the middle.
Elon Musk is firmly in the Hawking camp. The founder of Tesla Motors and SpaceX illustrated perfectly where he stands at the World Government Summit in Dubai last February.
You could practically hear the theme music from Alien playing in the background when he talked about the “ramifications” of “summoning the demon.”
“Sometimes what will happen is scientists will get so engrossed in their work that they don’t really realize the ramifications of what they’re doing,” he said, adding at another speaking engagement in Providence, Rhode Island in July: “I have exposure to the very cutting edge [of] AI, and I think people should be really concerned about it.
“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal,” he added. “I believe AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in regulation, it’s too late.”
Still, many AI industry insiders believe Musk is getting his artificial intelligence neurons crossed. They argue that the problem revolves around the human aspect of these programs.
“I am more concerned about [machine learning being used to] mask unethical human activities [rather than the threat of super-intelligent AI],” David Ha, a researcher working with Google Brain, said on Twitter in response to Musk’s remarks.
François Chollet, the creator of the deep-learning library Keras and author of Deep Learning With Python, was just as vocal. His response was that while artificial intelligence “makes a few existing threats worse”, it was unclear whether it created any new ones.
“Arguably the greatest threat is mass population control via message targeting and propaganda bot armies. [Machine learning is] not a requirement though,” he said.
Another AI apostle, the billionaire Microsoft founder Bill Gates, had a distinctly utopian vision with a few caveats. In November, he told an audience at the Misk Global Forum in Riyadh, Saudi Arabia that advancements in robotics and artificial intelligence would revolutionize a range of industries.
He highlighted the way this technology will transform the global working environment, and the benefits it will bring to social care and education while addressing the anxieties shared by millions of people.
“We are in a world of shortage, but these advances will help us take on all of the top problems,” he said. “We need to solve these infectious diseases … We need to help healthcare workers do their job. AI can solve these problems.”
Although the technological ‘hot houses’ of Silicon Valley in the United States have pioneered artificial intelligence research, along with smaller clusters in the United Kingdom, Canada, Germany and France, a new power is emerging.
Last July, China’s State Council rolled out a Next Generation Artificial Intelligence Development Plan with the aim of becoming a “premier global AI innovation” player in the next 12 years.
By 2030, the world’s second-largest economy is predicting its “core industry” will be worth $148 billion, with AI-related fields touching $1.48 trillion. China already rivals the world’s leading developed nations by spending 2.1% of its $11.2 trillion GDP on research and development.
Now, Beijing has AI firmly in its sights. The move has certainly spooked some of the ‘movers and shakers’ in Silicon Valley.
At November’s Artificial Intelligence and Global Security Summit in Washington, the executive chairman of Google’s parent company Alphabet, Eric Schmidt, expressed concerns that the US could fall behind in what has been called the ‘new space race’. “By 2030, they [China] will dominate the industries of AI,” he said.
One company spearheading development there is Baidu, a search engine second only to Google in global popularity, with an 80% share of the Chinese market. Robin Li is the brains behind this online behemoth, and his eyes are drifting toward a new horizon, a new Shangri-La.
His company is working on a host of AI projects across a range of industries, including smartphones, facial-recognition software, healthcare and unmanned vehicles. “AI is changing the world, at China speed,” the Baidu chief executive told Time magazine.
But at what cost? Artificial Intelligence is expected to have a seismic impact on society whether you live in New York or Nanjing, London or Lagos, Paris or Pretoria. In little more than a decade, up to 30% of workers worldwide could be “displaced” by this technology, the McKinsey Global Institute has suggested.
The highly respected private think tank has also predicted that automation could force up to 375 million workers into new occupations.
“AI is positioned to disrupt our world. [We] estimate that rapid advances in automation and artificial intelligence will have a significant impact on the way we work and our productivity,” McKinsey’s report, entitled Artificial Intelligence: The time to act is now, stated earlier this month.
“To capture value in this growing market, companies are experimenting with different strategies, technologies, and opportunities, all of which require large investments … But our most important takeaway is that companies need to act quickly. Those that make big bets now and overhaul their traditional strategies will emerge as the winners,” the report, compiled by Gaurav Batra, Andrea Queirolo and Nick Santhanam, pointed out.
The rewards will certainly be massive, running into the trillions of dollars. But the risks persist, chief among them: how do you control super-intelligent machines?
Steve Omohundro, an American AI scientist who has worked on the social implications of artificial intelligence, has been one of the leading voices warning of the dangers that lie ahead.
Back in 2014, he published a paper in the Journal of Experimental & Theoretical Artificial Intelligence, and laid out the case that unless we start designing AI systems very differently, we are heading for a nightmare scenario. Omohundro has predicted that they will become “self-protective” and will fight us to survive.
“We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed,” he stressed. “Designers will be motivated to create systems that act approximately rationally, and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives.”
Visions of The Matrix? Possibly.
“[It] is everywhere,” Morpheus, played by Laurence Fishburne, tells Keanu Reeves’ disbelieving character Neo in the 1999 cult hit. “[The Matrix] is all around us … the world that has been pulled over your eyes to blind you from the truth … that you are a slave, Neo. Like everyone else, you were born into … a prison for your mind.”
It might be hard to believe now, but only time will tell if this is pure science fiction or a prologue to science fact.