Blog · 23 Feb 2021

Why AI is here to stay in cyber defence

From visualising vast volumes of data to defending against cybercriminals, AI is critical in cybersecurity — but where does the human fit in?

Robert Hercock
Chief Research Scientist

Artificial Intelligence (AI) is now an essential part of cybersecurity. It’s here to stay. Cybersecurity is fundamentally a big data problem, and we need AI to process the vast amounts of data involved.

For context, if we switched back to relying on human brain power alone to run cyber defence, we'd need most of the working population to be analysts just to cope with the tens of thousands of malware incidents that happen every day. What's still up for debate, though, is how much automation it's wise to introduce.

Creating the ideal human-AI partnership

Successful cybersecurity services balance the strengths of human analytical skills with AI's ability to process data rapidly and identify the incidents humans should look at.

If you take the human out of the loop and fully automate cyber defence applications, you’re left with a system that doesn’t take context into account. To AI, closing a firewall and cutting off part of the business would seem like a logical ‘fix’ to an attack, but a human would realise that this would have serious, more widespread consequences, and would choose a different type of mitigation.
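
As a rough illustration of that balance, the sketch below shows one way an AI-scored alert could be gated before any action is taken: low-impact, high-confidence mitigations are applied automatically, while anything that would cut off part of the business is escalated to a human analyst. The alert fields, thresholds and actions are entirely hypothetical, not a real product API.

```python
# Minimal human-in-the-loop mitigation gate (illustrative only; the alert
# fields, impact scoring and actions are hypothetical, not a real product API).
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    confidence: float        # model's confidence that the activity is malicious (0-1)
    proposed_action: str     # e.g. "block_ip", "close_firewall_port"
    business_impact: float   # estimated share of the business the action would cut off (0-1)

def triage(alert: Alert, auto_threshold: float = 0.95, impact_limit: float = 0.05) -> str:
    """Auto-apply only low-impact, high-confidence mitigations; escalate everything else."""
    if alert.confidence >= auto_threshold and alert.business_impact <= impact_limit:
        return f"AUTO: applied {alert.proposed_action} for {alert.source_ip}"
    return f"QUEUE: escalated {alert.proposed_action} for {alert.source_ip} to an analyst"

print(triage(Alert("203.0.113.7", confidence=0.99, proposed_action="block_ip", business_impact=0.01)))
print(triage(Alert("198.51.100.4", confidence=0.97, proposed_action="close_firewall_port", business_impact=0.40)))
```

The point of the gate is the second case: the model is confident, but the proposed fix would take out a large slice of the business, so the decision stays with a person who understands the context.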

Relying too heavily on AI in cybersecurity brings serious risks. For example, it's easy for bias to be introduced into the data AI uses, something that's becoming more apparent in facial recognition technology and is prompting companies to explore other identification and verification options. All cybersecurity services that use AI must be alert to failures and misclassifications: human checks and balances are essential. And it's important to remember that cybercriminals will be well aware of AI's weaknesses, too. Advanced attackers will try to override or overload the AI system, potentially tricking it into continually attacking itself.

Data visualisation is the key to human interpretation

Cyber defence involves large amounts of complex data with many conflicting dimensions that even the most sophisticated spreadsheet technique can’t manage. It’s much easier for human analysts to understand all the ramifications if the data is translated into visual images.

Our data visualisation tools are designed to be so easy to use that, after five minutes of training, it's possible to sit down and play with the data in a meaningful way. Our current tools scan real-time and historical data to spot patterns over timeframes far longer than human analysts could cope with, revealing subtle attacks or anomalies that cybercriminals are trying to hide. They process data twenty times faster than older, more manual techniques, so investigations that used to take a week now take only a couple of hours.
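
To make the idea of spotting patterns over long timeframes concrete, here is a minimal, hypothetical sketch: a rolling statistical check over a month of simulated hourly event counts that flags the kind of subtle blip a human scanning raw rows would easily miss. It stands in for far richer tooling, not for any real product.

```python
# Minimal sketch of spotting subtle anomalies in long event histories
# (illustrative only; the data and thresholds are simulated, not real telemetry).
from statistics import mean, stdev

def rolling_anomalies(counts, window=168, z_threshold=3.0):
    """Flag hours whose event count deviates sharply from the preceding week (168 hours)."""
    flagged = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(counts[i] - mu) / sigma > z_threshold:
            flagged.append((i, counts[i]))
    return flagged

# Simulated hourly login-failure counts: a quiet daily cycle with one subtle spike.
baseline = [20 + (i % 24) for i in range(24 * 30)]   # 30 days of a daily rhythm
baseline[500] += 60                                   # the kind of blip that hides in a spreadsheet
print(rolling_anomalies(baseline))                    # -> [(500, 100)]
```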

Defending against evolving, malicious uses of AI

Cyber defence must keep on innovating to stay ahead of constantly evolving threats. As 5G brings faster networks and the Internet of Things expands the range of items that can be weaponised by bad cyber actors, the potential for malicious uses of AI increases. Every AI advance has a dark side.

Techniques designed for legitimate purposes can be hijacked, such as the ability to create fake images or video. On the positive side, businesses use these techniques in plenty of legitimate ways, such as simulating a complete fashion show using only virtual models, or creating special effects in the movies. But on the negative side, a home computer can now do what once needed a whole Hollywood studio: Generative Adversarial Networks (GANs) are creating convincing deepfakes. Suddenly, nothing is certain. The video clips of politicians we see on social media might not be real, yet we will probably believe they are, and this level of manipulation could potentially give bad actors the power to flip an election. Identifying and flagging manipulated images so that they aren't taken as real is going to be a long-term societal problem.
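
For readers curious about the mechanics, the sketch below shows the adversarial training loop at the heart of a GAN on toy one-dimensional data, using PyTorch. The tiny networks and target distribution are purely illustrative; real deepfake models are vastly larger image and video networks built on the same idea, with a generator learning to forge samples while a discriminator learns to catch them.

```python
# Minimal sketch of the adversarial idea behind GANs, on toy 1-D data
# (illustrative only; not a deepfake model, just the training loop in miniature).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: samples centred on 3.0
    fake = generator(torch.randn(64, 8))           # forged samples made from random noise

    # The discriminator learns to tell real samples from forged ones...
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # ...while the generator learns to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```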

Cyber defence in the age of coronavirus

Cyberattacks have increased in volume and frequency during the coronavirus pandemic, most likely driven by two factors. One, there's more opportunity, with people constantly online doing their best to work remotely. And two, cybercriminals are also under pressure to earn money, particularly if the pandemic has dried up their other sources of income.

Overall, this has increased pressure on cyber defence systems and accelerated the need for automation to help keep businesses protected. Most of our cybersecurity teams are working remotely, which makes this an ideal test case for our prototype Virtual Security Operations Centre (VSOC), the first fully immersive virtual reality experience for data analysis.

With the VSOC, our analysts can collaborate while accessing our security tools in a way that goes beyond what’s possible on traditional monitor displays. They can work together in the same environment, using the same apps in real time, even though they’re in different physical locations. We’re testing the future of AI in cyber defence, today.

Using AI in cybersecurity has huge potential to strengthen your organisation’s defences. If you’d like to talk through the right AI-human cyber defence balance for your organisation, please get in touch with your account manager. We’re here to help.

Discover how your business can exploit technology and innovation, both now and in the future, by downloading our ‘Winning the innovation race’ brochure.
