As AI Becomes Ever More Capable, Will It End Up Helping, Or Hindering, The Hackers?

Originally published on Forbes’ website by Ed Stacey, Managing Partner.

Hacking events have increasingly been in the news this year, as a range of serious ransomware and supply chain hacks have wreaked havoc on businesses and infrastructure. The latest (as of July 2021) is a supply-chain ransomware attack against Miami-based software firm Kaseya, affecting 1,500 of its customers – with the hackers (threat actors) demanding $70 million in cryptocurrency to release the data. According to the World Economic Forum, cyber-attacks now stand side by side with climate change and natural disasters as one of the most pressing threats to humanity.

No doubt ways will eventually be found to detect and pre-empt these latest styles of attack. The cybersecurity industry is defined by continual, if largely gradual, innovation – as new threats emerge, technology that protects, detects and responds to the attacks also emerges. This cat and mouse dynamic has been a fundamental trait of the industry to date: a permanently iterating relationship that supercharges the development of new technologies on both sides, where even a small edge over adversaries can pay dividends (or ransoms).

So it’s worth considering how developments in AI will play into this dynamic, especially as it evolves to become exponentially more powerful.

There are two main methods hackers use to attack or degrade their targets: 1) exploiting bugs in software systems, and 2) covertly obtaining authorisation credentials, often by tricking authorised users into revealing or using them (for example, by running malware).

Unfortunately, there are innumerable bugs in today’s software systems, many of them ‘zero-days’, i.e. bugs that hackers can exploit but that are not yet known to the software’s users or vendors, who are therefore very unlikely to have cyber-defences that directly prevent their exploitation. AI can be used to automate both the discovery of these software bugs and the ‘best’ ways to exploit them without being detected. DeepLocker, a proof of concept from IBM Research, for example, uses a neural network to covertly identify a target machine from attributes such as its software environment, geolocation, and even audio and video streams. Some of the most powerful AI methods today, however, are based on software agents using Reinforcement Learning (RL), which, as well as discovering the initial vulnerabilities, can also learn the optimum way to exploit them in the target for maximum impact without detection.

Of course, such agents could be used during the software development and testing phases to reduce the number of software bugs in the first place, but it seems unlikely they would be used long enough to detect them all – on balance, these RL agents will probably favour the attackers.
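
As a rough illustration of that defensive use, the sketch below shows a naive random fuzzer hammering a parsing routine during testing and recording the inputs that crash it – far simpler than an RL agent, but the same idea of letting software hunt for bugs before release. The `parse_record` function, its record format and the iteration count are all assumptions made for the example.

```python
import random
import string

def parse_record(data: str) -> dict:
    """Hypothetical routine under test: expects 'key=value;key=value' records."""
    fields = {}
    for pair in data.split(";"):
        key, value = pair.split("=")  # brittle: raises ValueError on malformed input
        fields[key] = value
    return fields

def fuzz(target, iterations: int = 10_000) -> list:
    """Feed random strings to the target and record any inputs that make it crash."""
    crashing_inputs = []
    for _ in range(iterations):
        length = random.randint(0, 64)
        candidate = "".join(random.choice(string.printable) for _ in range(length))
        try:
            target(candidate)
        except Exception:
            crashing_inputs.append(candidate)
    return crashing_inputs

if __name__ == "__main__":
    crashes = fuzz(parse_record)
    print(f"Found {len(crashes)} crashing inputs, e.g. {crashes[:3]!r}")
```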

In fact, there is a form of AI that can eliminate software bugs almost entirely. These advanced software assurance methods use ‘model-based software engineering’ techniques built on the mathematical methods of ‘formal verification’ (being pioneered by startups such as Imandra.ai), and are now starting to be adopted in critical systems for finance and defence, although they are not yet mainstream. Realistically though, due to the huge volume of legacy software code that is unlikely to ever get rewritten, zero-day exploits are likely to be around for a long time.
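
To make the formal-verification idea concrete, here is a toy sketch using the open-source Z3 SMT solver – purely illustrative, and not Imandra’s API. Rather than testing a handful of inputs, the solver proves a property of a small piece of logic for every possible input in its range.

```python
# pip install z3-solver
from z3 import Int, If, And, Not, Solver, unsat

x = Int("x")
clamp = If(x <= 50, x, 50)  # symbolic model of the code under scrutiny: min(x, 50)

s = Solver()
s.add(And(x >= 0, x <= 100))              # assume the precondition on the input
s.add(Not(And(clamp >= 0, clamp <= 50)))  # search for a violation of the postcondition

if s.check() == unsat:
    print("Verified: the property holds for every integer input in [0, 100].")
else:
    print("Counterexample found:", s.model())
```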

The second way that AI can be used by hackers is to obtain authorisations and credentials they shouldn’t have – for example, passwords. PassGAN uses a Generative Adversarial Network (GAN) to learn the statistical distribution of passwords from leaked password datasets and generate high-quality guesses. However, this attack fundamentally relies on poor cybersecurity practice – permitting ‘memorable’ passwords to be used – rather than representing a fundamental cybersecurity challenge. For example, the discriminator of PassGAN itself could be employed to prevent typical human-style passwords from being used in the first place.
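
A minimal sketch of that defensive idea follows – it is not PassGAN’s actual code, and the crude heuristic scorer below merely stands in for a trained discriminator network; the threshold and scoring features are illustrative assumptions.

```python
import math
from collections import Counter

def human_likeness_score(password: str) -> float:
    """Score in [0, 1]; higher means 'looks like a typical human-chosen password'.
    In a real deployment this would be a trained discriminator network, not a heuristic."""
    counts = Counter(password)
    entropy = -sum((c / len(password)) * math.log2(c / len(password)) for c in counts.values())
    short = 1.0 if len(password) < 12 else 0.0
    low_entropy = 1.0 if entropy < 3.0 else 0.0
    digit_suffix = 1.0 if password[-1:].isdigit() else 0.0
    return (short + low_entropy + digit_suffix) / 3.0

def accept_password(password: str, threshold: float = 0.5) -> bool:
    """Reject candidates that look like the passwords crackers guess first."""
    return human_likeness_score(password) < threshold

print(accept_password("summer21"))          # False – short, low entropy, digit suffix
print(accept_password("T7#qv!LwXz4$mQe"))   # True – long, high entropy
```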

A much more serious threat is the ‘phishing’ attack, whereby hackers trick users into revealing their credentials or otherwise persuade them to take an action designed to benefit the hackers (such as installing malware). In particular, AI can make ‘spear phishing’ attacks – in which individuals are precisely targeted using directly relevant information and impersonated using ‘deepfakes’ such as voice cloning – far more practical and far more plausible. Companies such as CybSafe, which focus on the human factors of cyber-defence, will certainly help organisations be more prepared for these types of attack, but other defences are also needed – not least much more secure email and messaging platforms (e.g. Tutanota and Worldr), and easy-to-use endpoint protection (e.g. CyberSmart). However, as AI improves, threats from AI-enabled spear phishing will continue to grow.

What then can a target do to minimise impact once an exploit has been used or credentials compromised? Darktrace, a pioneer in the use of AI/ML for cyber-defence after the initial exploit, now defends over 4,500 companies, having listed in April with a market cap of $3.6 billion. Darktrace exploits one of the oldest forms of Machine Learning (ML) – anomaly detection – which, like a biological immune system, flags activities within the target that it doesn’t categorise as ‘normal’. Being an unsupervised ML method, anomaly detection scales well and learns quickly.
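
Darktrace’s own models are proprietary, but the toy example below conveys the principle, using scikit-learn’s IsolationForest as the unsupervised anomaly detector: train only on a baseline of ‘normal’ telemetry, then flag anything the model cannot reconcile with it. The feature choices and numbers are made up for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline telemetry per session: (bytes transferred, connections/min, distinct ports)
normal_traffic = rng.normal(loc=[5_000, 20, 3], scale=[1_000, 5, 1], size=(2_000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)          # unsupervised: no attack labels are needed

new_sessions = np.array([
    [5_200, 22, 3],                   # indistinguishable from the baseline
    [250_000, 400, 60],               # exfiltration-like burst of traffic
])
print(detector.predict(new_sessions)) # 1 = normal, -1 = anomaly  ->  [ 1 -1]
```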

Of course, it’s also possible that a hacker’s stealthy AI agent could infiltrate a system long enough to learn the normal behaviours of its environment for itself, and thence hack the system undetected. But a future cyber-immune system could defend against this scenario by using its own AI agents to obfuscate the system’s normal behaviours, rendering such infiltration impractical. On balance, when implemented as an AI-enhanced enterprise immune system, this ‘defence in depth’ strategy will probably favour the defenders rather than the attackers.

However, as the world becomes more connected, enterprises increasingly need to communicate with their suppliers, partners and customers – increasingly through their APIs, which have become a key target for hackers. Connected edge devices are also becoming an increasingly attractive target, as cybersecurity levels lag woefully behind the (already low) standards used within most enterprises. Yet, due to their locations, edge devices are typically more vulnerable to physical attack than enterprise servers and datacentres, and therefore should have much higher software security standards, not lower. Furthermore, their core operating system software kernels are often not kept updated throughout their lifetimes, and so become increasingly vulnerable to widely-known software exploits. (Foundries.io is a startup focused specifically on addressing these problems in a scalable way, in partnership with leading semiconductor companies.)

Whilst the threat of cyber-attacks within digital supply chains and ecosystems is rapidly increasing, a recent report found that a clear majority of organisations have low confidence in their ability to defend against attacks targeting software build environments – i.e. to prevent supply chain attacks.

AI agents impersonating an ecosystem partner or their digital assets, including edge devices, could become a much more serious threat in the future. It is much harder to understand ‘normal’ behaviour in such an ecosystem than behaviour within an enterprise, due to the lower volume of interactions and the challenge of sharing network data fairly and securely. With the growing computational power of edge devices, they could become an attractive reservoir for AI-enabled malware, able to learn stealthy behaviours and then to launch coordinated attacks on the network.

Two approaches that can mitigate these ecosystem threats are digital identities for all machines within an ecosystem (pioneered by firms such as Venafi), and the adoption of more distributed data architectures, such as the use of digital twins for every asset and organisational entity within an ecosystem (an approach being pioneered by startup Iotics).
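
As a small sketch of the machine-identity idea (illustrative only – not any particular vendor’s product), the snippet below configures a TLS server that refuses any connection from a machine that cannot present a certificate signed by the ecosystem’s own certificate authority. The file paths and port are placeholder assumptions.

```python
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.load_verify_locations(cafile="ecosystem_ca.pem")
context.verify_mode = ssl.CERT_REQUIRED          # reject clients with no valid identity

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()          # handshake fails for unknown machines
        peer_identity = conn.getpeercert()["subject"]
        print(f"Authenticated machine {peer_identity} at {addr}")
```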

A recent report from Europol found that other prevalent AI-enabled attack methods include document-scraping malware to make attacks more efficient, evasion of image recognition and voice biometrics, and the corruption of datasets through misinformation.

So where might AI-enabled hacking eventually lead?

According to an industry survey from Forrester, 77% of respondents expected weaponised AI to lead to an increase in the scale and speed of attacks, while 66% felt that it would lead to novel attacks that no human could envision.

It is certainly feasible to imagine a fully autonomous, end-to-end RL agent that uses a combination of hijacked and deepfake identities, selects and attacks its targets without human direction, and runs its own cryptocurrency wallets to collect ransoms and sell stolen data. A swarm of such agents could rapidly evolve by comparing and exchanging strategies and exploits, becoming ever more stealthy and effective – perhaps for the first time gaining an insurmountable lead over cyber-defences. The real-world designers of such a system would need only a tenuous connection to these agents, making the occasional crypto trade to transfer value – which makes it orders of magnitude more difficult to identify the real threat actors, let alone prove their guilt. It’s a small step to imagine that such agents could eventually escape human control altogether – the emergence of the first fully autonomous parasites on the world’s digital economy.

AI clearly has dramatic potential to vastly accelerate the volume and severity of cyber-attacks. A truly intelligent strategy would be for leading countries and organisations to get in front of this threat by eliminating the low-hanging fruit of poor cybersecurity practices (especially for edge devices), and by encouraging the adoption of open-source AI agents that test for software exploits, pre-empting at least the most common styles of cyber-threat.

Ultimately, it is society’s choice whether AI ends up hindering the hackers, or helping them. Let’s hope it doesn’t ever join them.