Malicious Intelligence: The AI Antithesis
The Rise of Malicious Intelligence
Born out of the misuse of AI-powered systems, malicious intelligence has the capacity to be a real threat to the modern-day business ecosystem. It’s true that plenty of AI applications play a useful, even critical, role, but focusing on the benefits of AI while ignoring the dangers is unwise.
We can’t say we haven’t been warned. The best and brightest continue to make bleak predictions about AI usage and the dangers of ignoring the threats that the technology poses.
Elon Musk said: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.”
Stephen Hawking said: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” Hawking went on to say that ignoring the dangers of AI “would be a mistake, and potentially our worst mistake ever.”
And Bill Gates said: “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
Of course, much of the literature surrounding artificial intelligence is concentrated on the fabulous future of tomorrow. After all, it’s a much nicer thought that new technology will ease our burdens rather than create new ones.
But enterprise organisations in all industries should be aware of the risks present today. The threats might not yet be at the level of world-ending supercomputers, but enterprises are already feeling the initial effects of AI being used to bolster cyber-attacks.
Forms of Malicious Intelligence
While an all-powerful, super-intelligent machine taking over the world remains a conversation for film producers and science-fiction writers, an AI-powered tool in the wrong hands still has enormous potential for mischief and danger.
In fact, there are two particular approaches to malicious intelligence that have begun to make waves within the cybersecurity arena – AI-powered malware and smart phishing.
The world of cybersecurity has always been akin to an arms race, where malicious hackers and ethical hackers take it in turns finding vulnerabilities to either exploit or patch. But imagine for one second that instead of playing against another human, ethical hackers are facing a super-intelligent computer that learns from every mistake it makes. Sure, you’d back the human for the first few games of cybersecurity chess, but once the machine starts winning, it isn’t going to stop.
AI and machine learning-powered malware is no longer a theoretical scenario. At Black Hat, IBM scientists demonstrated DeepLocker, a proof-of-concept of a highly targeted and evasive attack tool powered by AI. It works in much the same way as regular malware, attempting to propagate by becoming part of the network, but its delivery is far more sophisticated: DeepLocker moves around the IT network undetected until it reaches a specific victim, and only then unleashes its malicious payload.
What’s most concerning about this attack vector is that, by conventional methods, it’s virtually undetectable. Malware analysts will find it extremely difficult to work out what class of target it is looking for, and because the key conditions needed to unlock the attack can be any of several attributes, reverse engineering the sample to understand what has happened and who the target was is practically impossible.
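The concealment idea behind this class of attack can be sketched in a few lines. The names and the toy XOR cipher below are purely illustrative and are not IBM’s implementation (which used a deep neural network, such as face recognition, as the key source); the point is that the sample stores only a hash of the derived key, so an analyst who fully reverse engineers it still cannot learn which target attribute unlocks the payload.

```python
import hashlib

def derive_key(attribute: bytes) -> bytes:
    """Derive a symmetric key from an observed target attribute
    (in IBM's demo this role was played by a neural network's output)."""
    return hashlib.sha256(attribute).digest()

def _keystream(key: bytes, n: int) -> bytes:
    """Toy hash-based keystream, standing in for a real cipher."""
    out, block = b"", hashlib.sha256(b"stream" + key).digest()
    while len(out) < n:
        out += block
        block = hashlib.sha256(block).digest()
    return out[:n]

def lock(payload: bytes, attribute: bytes):
    """Encrypt the payload under the target's key; keep only a key hash.
    The attribute and key are discarded, so neither appears in the binary."""
    key = derive_key(attribute)
    locked = bytes(p ^ s for p, s in zip(payload, _keystream(key, len(payload))))
    check = hashlib.sha256(b"check" + key).digest()
    return locked, check

def try_unlock(locked: bytes, check: bytes, observed: bytes):
    """Attempt decryption with an observed attribute.
    On the wrong target the check fails and the payload stays opaque."""
    key = derive_key(observed)
    if hashlib.sha256(b"check" + key).digest() != check:
        return None
    return bytes(p ^ s for p, s in zip(locked, _keystream(key, len(locked))))
```

Because `check` is a one-way hash, brute-forcing the unlock condition means guessing the target attribute itself, which is exactly why defenders cannot simply read the target out of a captured sample.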
First there was phishing, then there was spear-phishing, and now the latest evolution has arrived: smart phishing. Adhering to much the same rules as its predecessors, smart phishing looks to manipulate targets into undertaking a specific action – clicking a link, sharing information and so on – except the social engineering is driven by AI-powered machine learning rather than by human effort.
Much like AI-powered malware, this is no hypothetical scenario. Two data scientists from the security firm ZeroFOX have already demonstrated how dangerous the technology can really be. They taught an AI to study the behaviour of social network users and then design and deploy its own phishing bait. They then pitted the AI against a human competitor – Forbes staff writer Thomas Fox-Brewster – to see who could phish the most victims.
The results were undeniable. Not only did the AI produce over six times as many tweets in the designated two-hour time frame, but the conversion rate was markedly higher as well.
The Dark Side of AI
Both the IBM and ZeroFOX demonstrations serve as a stark reminder of the threats that AI technology can pose to businesses and the applicability of malicious intelligence in the modern era. With the emergence of AI tools, the bad guys can now perpetrate advanced cyberattacks, en masse, at the click of a button, causing untold damage and repercussions to enterprise organisations.
As great as AI can be at defending us from cyberattacks, it can also open the door to ever more harmful and difficult-to-detect attacks. As a society, we need to be responsible about our application of ‘smart’ products and understand the risks before deploying them. If we are not, we will quickly start losing the cat-and-mouse hacking game we have entered into with the cybercriminals, and the predictions made by many technology experts may just come true.
Artificial intelligence is just one of the key topics being discussed at the world’s premier financial services event – Sibos 2019. HPE will be exhibiting at the event in London at stand Z141. To learn more about how we are helping leading institutions within the financial services industry accelerate business outcomes, improve customer experiences, and drive operational efficiency – visit us at Sibos 2019.
Artificial Intelligence Chief Technologist, Hewlett Packard Labs, Hewlett Packard Enterprise