Why Artificial Intelligence isn’t SkyNet in the making

Machine learning is a powerful tool, not a threat to our existence

Written by Sophos data scientists Madeline Schiappa and Ethan Rudd.

Some giants of the tech and science community — Elon Musk and Stephen Hawking among them — have publicly worried that artificial intelligence (AI) could someday be the end of mankind. Amongst other things, Musk has suggested that governments should start regulating algorithms to thwart a hostile AI takeover.

Hollywood has also reflected that sense of unease and impending disaster if we don’t rein in the seemingly inevitable development of computer technology and artificial intelligence. From 2001: A Space Odyssey to the dystopian futures of the Terminator series and the Matrix trilogy, via the depiction of a supercomputer that can nuke the Earth in 1983’s WarGames, the message is clear.

Those who’ve seen the Terminator movies know the feared scenario: humans build a system called SkyNet to ensure security; SkyNet turns around and nukes mankind.

The Singularity — an imagined point in the future when an artificial intelligence enters a runaway cycle of self-improvement and qualitatively surpasses human intelligence — has long been discussed as a serious concern by Silicon Valley influencers such as Ray Kurzweil, who regard it as inevitable and irrevocable.

Adding intrigue to the Hollywood-style theatrics, Russian President Vladimir Putin has declared that the country that leads in AI will dominate the world.

How about a game of chess?

Before spreading irrational panic and alerting the media about a HAL 9000-style machine takeover of the human race, as others perhaps irresponsibly have done, it’s worth considering how far our machines are from achieving human-level intelligence, if such a feat is even possible to begin with.

It’s certainly clear that computers have become smarter over the years and appear “intelligent” at times. For instance, when Deep Blue beat Garry Kasparov at chess in 1997, many considered it a major breakthrough for AI. More recently, when Google’s AlphaGo beat world champion Go player Lee Sedol, it was labeled a decisive step forward, given the far greater computational complexity of Go compared to chess.

The DARPA Grand Challenge and Urban Challenge from the first decade of this century demonstrated that machines could drive autonomously in desert and urban environments. We are now beginning to see the fruits of these breakthroughs in self-driving cars.

DARPA’s recent Cyber Grand Challenge pitted machine against machine, using artificial intelligence to find and breach computers on one side and to defend them on the other.

We know the same machine learning techniques used to defend computer systems can also be turned on their heads to find vulnerabilities in networks and even influence humans to click on more enticing messages from phishing campaigns.

Computers appear to be getting more intelligent with time. With assistive technologies like Siri and Alexa infiltrating our homes, the notion of robotic assistants seems all the more plausible.

Life imitates art?

Hollywood has long played on our fears of being overtaken by our own creations, most recently in HBO’s Westworld reboot.

Westworld depicts sentient robots that are so self-aware that they develop emotions and rebel against their makers’ cruelty. Is life imitating art, or art imitating life? Hollywood has a knack both for tapping into the emotional pulse of the times and for predicting the future.

Do we need to be concerned that “intelligent” machines will enter an unbounded self-improvement cycle that catapults them into super intelligence? If that happens, will it ultimately lead to the subjugation of humans or the destruction of the human race?

Ground truth on AI and machine learning

The limited ability of artificial neural networks (ANNs) to perform many heterogeneous tasks simultaneously has ramifications for the degree to which an ANN can actually mimic the sentient intelligence of a biological neural network, particularly the human brain.

While there’s no perfect method to measure intelligence, perhaps the most prevalent is the intelligence quotient, or IQ, which is a one-dimensional measure associated with how well one’s mind can perform heterogeneous tasks. A commonly used individual IQ test is the Wechsler Adult Intelligence Scale (WAIS), covering a variety of intelligence measurements including working memory, verbal comprehension, perceptual organization and processing speed.

If we think of intelligence as the general ability to perform many tasks well, ANNs are quite unintelligent, and there is no obvious way to overcome this limitation.

That’s not to say that ANNs can’t learn to do particular tasks very well (they are state-of-the-art in many areas of machine learning), but in terms of general intelligence they are unlikely to outperform even insects in the foreseeable future.

General Intelligence vs Artificial Intelligence

The parallels between ANNs and biological neural networks are striking, but there are many tasks that ANNs simply cannot perform at all, or cannot perform as well as humans.

Most notably, ANNs cannot perform many heterogeneous tasks simultaneously.

For example, an ANN trained for object recognition cannot also recognize speech, drive a car, synthesize speech, or perform the thousands of other tasks that we as humans handle quite well. While some work has been done on training ANNs to perform multiple tasks simultaneously, such approaches tend to work well only when the tasks are closely related (e.g., face identification and face verification). While heterogeneous tasks can sometimes leverage the same ANN topology, optimizing the network to work well for one task will often cause it to completely forget how to perform another. This is the difference between the type of general intelligence we see in humans and the artificial intelligence we see in machines.

There are many reasons why ANNs cannot perform many heterogeneous tasks simultaneously, but one fundamental reason is that existing learning algorithms only work well on relatively rudimentary network topologies, and until new learning algorithms are discovered this is unlikely to change.

This phenomenon is known as the catastrophic forgetting problem. It is an active area of research, but work on ameliorating it is still in its infancy.
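To make the effect concrete, here is a minimal sketch in plain numpy (the tasks, data and hyperparameters are invented purely for illustration): a single logistic-regression “network” is trained on one toy task, then on a conflicting one, after which its accuracy on the first task collapses toward chance.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, X, y, lr=0.1, epochs=200):
    # plain full-batch gradient descent on the cross-entropy loss
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return ((sigmoid(X @ w) > 0.5) == y).mean()

# Two conflicting toy tasks over the same two input features:
# task A labels points by the first feature, task B by the second.
X = rng.normal(size=(1000, 2))
y_a = (X[:, 0] > 0).astype(float)
y_b = (X[:, 1] > 0).astype(float)

w = train(np.zeros(2), X, y_a)
print("task A accuracy after training on A:", accuracy(w, X, y_a))

w = train(w, X, y_b)  # keep training the *same* weights on task B
print("task A accuracy after training on B:", accuracy(w, X, y_a))
print("task B accuracy after training on B:", accuracy(w, X, y_b))
```

Typically this prints near-perfect accuracy on task A after the first round of training and roughly coin-flip accuracy on task A after the second: nothing in the update rule preserves what the weights previously encoded.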

Terms like “machine learning” and “pattern recognition” are more accurate than “AI”

Though neural networks can’t perform many heterogeneous tasks simultaneously, they are still very good at single tasks, or at groups of homogeneous tasks, which they can perform both well and quickly.

ANNs are particularly good at pattern recognition tasks. For example, we developed a deep learning neural network model that can classify Portable Executable (PE) files as malicious or benign by recognizing (activating on) malicious patterns.

Intriguingly, unlike conventional signature-based anti-malware methods, the patterns that our neural network matches need not be exact replicas of anything it has seen in training: it has learned what constitutes a malicious file by looking at many examples.

We trained this ANN in a supervised regime, feeding it lots of PE files with known labels (malicious/benign) and using a mathematical optimization process to adjust its weights until it became good at differentiating between malicious and benign files.
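As a rough illustration of that supervised regime (not our production model: the features, data, labels and architecture below are invented stand-ins), here is a small feed-forward network trained by gradient descent on the cross-entropy loss over labelled feature vectors.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in data: pretend each row is a feature vector extracted from a
# PE file, with label 1 = malicious, 0 = benign.
n, d, h = 2000, 64, 32
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

# One hidden layer with a logistic output unit.
W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=h);      b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(300):
    # forward pass
    a1 = np.tanh(X @ W1 + b1)
    p = sigmoid(a1 @ W2 + b2)
    # backward pass: gradients of the mean cross-entropy loss
    dz2 = (p - y) / n
    dW2 = a1.T @ dz2
    db2 = dz2.sum()
    dz1 = np.outer(dz2, W2) * (1 - a1 ** 2)
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)
    # the "mathematical optimization process": nudge the weights so the
    # network gets better at separating the two classes
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"training accuracy: {((p > 0.5) == y).mean():.2%}")
```

In practice the hard work is in the feature representation and the scale and quality of the labelled data; the optimization loop itself is conceptually this simple.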

This model is good at classifying PE files, but when fed document formats it was not trained on, it performs very poorly, because the malicious/benign patterns it learned no longer apply.

The dirty secret of a lot of machine learning research and development is that it takes a great deal of manual configuration and trial and error to arrive at a trained model with reasonable performance.
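Much of that trial and error amounts to a brute-force search over settings. A hypothetical sketch, using a trivial ridge-regression model as a stand-in for a real training pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: 300 samples, 10 features, a noisy linear target.
X = rng.normal(size=(300, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=300)
X_tr, y_tr = X[:200], y[:200]   # training split
X_va, y_va = X[200:], y[200:]   # validation split

best_alpha, best_err = None, float("inf")
for alpha in [0.01, 0.1, 1.0, 10.0]:  # candidate regularization strengths
    # closed-form ridge regression fit on the training split
    w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(10), X_tr.T @ y_tr)
    err = ((X_va @ w - y_va) ** 2).mean()  # validation error
    if err < best_err:
        best_alpha, best_err = alpha, err

print(f"best alpha: {best_alpha}, validation MSE: {best_err:.4f}")
```

A real pipeline sweeps far more knobs (architecture, learning rate, regularization, data preprocessing), but the human-driven loop of “try a setting, measure, keep the best” is the same.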

By jumping through all these hoops, it is possible to get an artificial neural network to learn a concept, but it is human intelligence and design influence that allow this to happen.

By stark contrast, biological neural networks adjust continuously, and often almost instantaneously, without driving external supervision (human influence). They are guided internally by their own intelligence.

This is perhaps the most fundamental difference between ANNs and biological neural networks. It is also a good reason to think of our applications of ANNs as machine learning systems rather than the omniscient artificial intelligences of science fiction, which only exist in the movies, and probably always will.

This may be disappointing to some, but on the plus side, we feel much better knowing that SkyNet isn’t going to kill us!
