Good Cybersecurity Complements AI with a Human Workforce

January 27, 2021

Written by Ryan Szporer

Artificial Intelligence (AI) continues to grow in capability and in the range of spaces where it is deployed. Cybersecurity is one example: according to a report published by consultancy Capgemini, two out of three organisations had planned to deploy AI in such a capacity by the end of 2020. There are undeniable concerns, though.

“One of the first manifestations of applied AI, and a pretty quick win, has been in cybersecurity, for instance in spotting phishing attacks. But it’s important to remember that AI also opens up a new potential attack surface. Does [AI] create security challenges? Of course. It’s an immature discipline with flaws that have yet to be addressed,” said Matt Hatton, Founding Partner at research firm Transforma Insights.  

Phishing and Other Cybersecurity Concerns 

“Generally, most companies’ biggest security threat is the people who work there, revealing passwords to people through phishing emails and other social engineering attacks. That tends to be a bigger problem,” Hatton added. “AI will do what it’s told. That’s the big advantage. The big disadvantage? It will do what it’s told and so you end up with all these mutant algorithms, which are doing exactly what they wanted them to do.” 

When developed to do so, AI can indeed identify phishing threats, which account for approximately 70% of cyber attacks according to Iron Defence. AI can track active phishing sources and recognise harmful websites an employee may visit. If a malicious link is clicked, an AI or Machine Learning (ML)-based cybersecurity system such as Darktrace's or Cylance's can classify the resulting activity as abnormal behaviour, enabling a human response and limiting the damage done.
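To make the idea of flagging "abnormal behaviour" concrete, here is a minimal, illustrative sketch of a statistical anomaly check on a single traffic metric. The baseline numbers and the z-score threshold are invented for illustration; commercial systems like Darktrace or Cylance use far more sophisticated models over many signals.

```python
# Hedged sketch of behaviour-based anomaly flagging, the kind of check
# an ML-driven cybersecurity system performs at far greater scale.
# All figures and the threshold below are illustrative assumptions.
from statistics import mean, stdev

# Baseline: outbound requests per minute observed for one workstation
baseline = [18, 22, 19, 21, 20, 23, 17, 20, 19, 21]

mu, sigma = mean(baseline), stdev(baseline)

def is_abnormal(rate, threshold=3.0):
    """Flag rates more than `threshold` standard deviations above baseline."""
    return (rate - mu) / sigma > threshold

# A sudden burst -- e.g. a device beaconing out after a malicious click
print(is_abnormal(250))  # True: escalate to a human analyst
print(is_abnormal(21))   # False: within normal variation
```

The key point survives the simplification: the system does not need to recognise the specific attack, only that behaviour has departed from the learned baseline, at which point a human takes over.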

However, that capability depends on sound development, and the need for human staff is still evident. For example, cybersecurity risks such as data poisoning (feeding inaccurate data into training models) and adversarial machine learning (manipulating inputs) can be countered by the human development of increasingly robust AI training models.
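Data poisoning can be illustrated with a toy model. The "classifier" below learns a decision threshold from labelled risk scores; the data, labels, and midpoint rule are all invented for this sketch, not drawn from any real system. Mislabelled malicious samples drag the learned boundary upward, which is exactly the kind of skew human review of training data is meant to catch.

```python
# Hedged illustration of data poisoning on a toy threshold "classifier".
# Samples are (risk_score, label) pairs: 0 = benign, 1 = malicious.

def learn_threshold(samples):
    """Midpoint between the mean benign score and the mean malicious score."""
    benign = [x for x, y in samples if y == 0]
    malicious = [x for x, y in samples if y == 1]
    return (sum(benign) / len(benign) + sum(malicious) / len(malicious)) / 2

clean = [(0.1, 0), (0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1)]

# Attacker slips in high-risk samples mislabelled as benign
poisoned = clean + [(0.85, 0), (0.9, 0), (0.95, 0)]

print(learn_threshold(clean))     # ~0.525: cleanly separates the classes
print(learn_threshold(poisoned))  # ~0.7: boundary dragged up, attacks slip by
```

A model trained on the poisoned set would now score genuinely malicious activity around 0.65 as benign, which is why the article's point stands: humans still have to vet what the machine learns from.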

Humans Still Have a Role to Play 

So, from a human perspective, the role goes far beyond still-critical tasks like policy creation and backup management, which staff members typically handle.

According to Abeer Raza, CMO of digital-solutions provider TekRevol, expert review and analysis of data sets still has a huge part to play in AI; automating those processes entirely could leave data open to exploitation. In effect, AI is used to solve problems, but humans must still point it towards the problems to solve. Illustrating the continuing demand for skilled staff, Cybersecurity Ventures projects 3.5 million unfilled cybersecurity jobs by 2021.

Appearing at the virtual 2020 AI & Big Data Expo, James Bell, Dow Jones’ Head of AI and Machine Learning (Customer Solution Engineering), spoke of six principles that make up AI ethics: augmentation, fairness, reliability, privacy, transparency, and accountability. Bell described augmentation as one of two forms of automation. Whereas “re-engineering” replaces the majority of the workforce to make economic progress, augmentation acknowledges humans as an asset. 

“[Through augmentation], you enable [humans] to work faster and better by building a system which complements human cognitive capability,” he said. “This is sometimes called Human-Machine Teaming, and it’s found in AI design where the complex nature of the workflow or task benefits from human context paired with machine learning. Used thoughtfully, AI can augment our intelligence, helping us to chart a wiser and better path forward.”

A test described in a VentureBeat piece by Seth Colaner offers a striking example: Cybersecurity firm LogicHub CEO Kumar Saurabh described how 40 different automated systems failed to detect a threat inside a small amount of data. In contrast, one in four human analysts were able to identify the issue, albeit only after an impractical one to three hours. This highlights the need for the two sides to work together, at least until AI becomes as creative at solving problems as humans. In the piece, Saurabh opined that it may take decades to reach that point. 

Bigger Networks, Bigger Problems 

As 5G, the latest generation of cellular technology, rolls out, advances in AI and cybersecurity are being made constantly. However, bad actors have access to technology too, resulting in a tug of war. The attack surface is also growing, as smart buildings and even cities become increasingly connected. As a result, more and more devices act as potential access points for cyber criminals.

BrainBox AI is a Software as a Service (SaaS) provider with a cloud-based Heating, Ventilation, and Air Conditioning (HVAC) solution. Its Channel Sales Director, Andy McMahon, sees the rollout of 5G and eventually 6G as an opportunity to get smarter, ingest more data, and learn more about a given building before making the required adjustments. McMahon nevertheless sees cybersecurity as the top technological hurdle moving forward, specifically regarding the optimisation of AI.

“IT, IT groups, IT personnel: Whether it’s a building or industrial factory, IT plays the largest role in making sure not only that their personal networks, their company networks, aren’t exposed, can’t be hacked, etc., but also that their underlying automation control networks are also protected, and there’s a healthy level of paranoia there, which is important, between firewalls and cybersecurity solutions,” he said, optimistic with regard to the advancements being made in the area. 

Speaking at market-research firm CCS Insight’s annual Predictions event, Mitra Azizirad said she does see investment ratcheting up in the coming years, not just in cybersecurity as an application of AI and ML, but in the cybersecurity of AI itself. Azizirad, Corporate Vice President of Microsoft AI and Innovation Marketing, added that there is a holistic, end-to-end approach to the adoption of AI and ML by enterprises, in which the role of employees cannot be overstated.

“[…] From design, to development, to training, to deployment, and then that really very important step of monitoring, managing every step in that lifecycle… Ensuring governance and compliance for the responsible usage of AI and transparency in checking those models for bias and other kinds of issues is vital,” she said. 

“Focusing only on the technology and overlooking how the culture needs to adapt could be very detrimental, because we see time and time again that there’s this higher correlation of success with AI in organisations that invest in the skill building and the coaching alongside the technological investments.”  

Feature image courtesy of Jeshoots.com (via Pexels).
