The Common Ground of AI Ethics and Security Concerns

February 22, 2021

Written by Ryan Szporer

Artificial intelligence (AI) has many use cases, including cybersecurity. However, securing AI systems themselves has to be a priority. That’s the stance of the Securing AI Industry Specification Group (SAI ISG) at the European Telecommunications Standards Institute (ETSI), which released an official problem statement on the issue earlier this year.

An Aim to Secure AI

Alex Leadbeater, the Chair of SAI, told 6GWorld™ it’s critical to look beyond use cases, at least to start. As the first standardisation initiative looking to secure AI, SAI faces many challenges, including promoting widespread adoption. Leadbeater sees the report as a step towards that adoption and towards finding common ground.

“How would you ensure the citizen could actually use an AI application and say, ‘You know what? I can prove and I can understand it’s unbiased. I can get some assurance it’s secure. I can get some assurance that when an AI makes a decision I stand somewhere being able to assert or at least be given some assurance that it’s working as intended’?” said Leadbeater, who also acts as Head of Global Obligations Futures and Standards at BT.

Speaking to 6GWorld separately, Kathleen Walch, Principal Analyst at AI research firm Cognilytica, said bias can be tricky. She said it can be hard to spot and gave an example in which people born in even years had gotten loans at a higher rate than their odd-year counterparts.

“The system made a correlation that people born in even years would have a better chance at getting a loan and that really had nothing to do with anybody’s ability to get a loan. It was just what was in the data that was fed to the system. So you have to be careful about things like that, where no gender or race or geographic location came into play,” she said. “There’s been lots of discussion about what data is trained in these systems and how often do you need to retrain it. Bias is always going to be there. So how do you try and mitigate it?”
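A short sketch can make that failure mode concrete. The Python below trains a toy loan model on synthetic data in which birth-year parity happens to correlate with past approvals, then audits approval rates by group; the data, features, and mitigation are illustrative assumptions, not details from Cognilytica or the SAI report.

```python
# Minimal sketch, assuming synthetic data: a spurious feature (birth-year
# parity) leaks into historical labels, and the model picks it up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(50, 15, n)     # legitimate signal
even_year = rng.integers(0, 2, n)  # irrelevant attribute

# Historical approvals were skewed: even-year applicants got a boost,
# so the bias is baked into the labels the model learns from.
approved = (income + 8 * even_year + rng.normal(0, 10, n)) > 55

X = np.column_stack([income, even_year])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Audit: compare predicted approval rates across the two groups.
preds = model.predict(X)
for g in (0, 1):
    print(f"even_year={g}: approval rate {preds[even_year == g].mean():.2%}")

# Mitigation sketch: drop the spurious feature, retrain, and re-audit.
model_fair = LogisticRegression(max_iter=1000).fit(X[:, :1], approved)
```

The audit prints a visible gap between the two groups even though birth year carries no real signal, which is exactly the kind of check Walch’s mitigation question points towards.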

Navigating the AI Threat Landscape

It’s an ambitious goal, but a necessary one. Leadbeater said part of the problem is that AI presents a far more complex threat landscape for security precautions to navigate than traditional IT systems do.

“The actual bigger AI itself and all of its layers, its data sources, its models, its algorithms, its feedback loops, its actual fundamental building blocks, all have subtleties and nuances and additional complexities and layering, which security will be able to consider that don’t exist [in traditional IT systems],” he said. “The other challenge is there’s no what you might term, ‘Walks like a duck, quacks like a duck, it’s a duck’ equivalent in here.”

As a result, Leadbeater said one cannot simply replicate the same precautions taken with older technologies, implying it’s almost like starting from scratch. It doesn’t help that things change very rapidly in the field.

“Things come into fashion and then go out of fashion… AI is an evolving area. Therefore, there is a tendency for AI to just creep into things relatively unintended,” he said.
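One way to see why precautions from older technologies don’t simply carry over is to look at a surface traditional IT security never had to defend: the training data itself. The toy Python sketch below, built entirely on synthetic data and assumed numbers, shows a batch of mislabelled “poison” samples quietly shifting a spam filter’s decision boundary.

```python
# Toy data-poisoning illustration (assumed data, not from the ETSI report).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Clean corpus: one feature, a suspicious-keyword score; high means spam.
ham = rng.normal(0.2, 0.1, 200)
spam = rng.normal(0.8, 0.1, 200)
X_clean = np.concatenate([ham, spam]).reshape(-1, 1)
y_clean = np.array([0] * 200 + [1] * 200)
clean_model = LogisticRegression().fit(X_clean, y_clean)

# Poisoning: an attacker slips 150 high-scoring messages labelled "ham"
# into the pipeline, dragging the decision boundary upward.
X_poison = rng.normal(0.7, 0.02, 150).reshape(-1, 1)
X_bad = np.vstack([X_clean, X_poison])
y_bad = np.concatenate([y_clean, np.zeros(150, dtype=int)])
poisoned_model = LogisticRegression().fit(X_bad, y_bad)

probe = np.array([[0.7]])  # a message that should be flagged
print("clean model says spam:", bool(clean_model.predict(probe)[0]))
print("poisoned model says spam:", bool(poisoned_model.predict(probe)[0]))
```

No firewall rule or software patch would flag this: every component ran exactly as designed, and only the data source was compromised.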

Leadbeater’s goal is to use the release of the report as a launch pad for a high-level view of the situation over the next 1.5-2 years. At that point, he said, the group plans to start developing technical mitigations and “pick out some threads.” These may include ethics and security challenges, for example AI’s “obscurity.”

Many people, despite perhaps having reservations about AI systems, currently use them without knowing it, with the SAI report making the distinction between automated vehicles and AI in lifestyle applications. Walch said sometimes it’s a matter of being willing to accept some risk for the sake of convenience when the stakes are lower.

She compared vehicles, where an accident could be disastrous, to a voice assistant at home. With the latter, if you were to ask for a recipe, the result might not taste good. If you were to ask about the weather outside, you could always check for yourself. In other cases, like with vehicles, there just isn’t that fallback if humans are taken out of the equation.

“When we talk about artificial intelligence, we talk about it as the seven patterns of AI, because two people can be talking about AI and not talking about the same thing. I could be talking about chatbots. You could be talking about [automated] vehicles. Someone else could be talking about predictive maintenance. We’re all talking about AI, but it’s not the same application, and I think that when you use more of an augmented intelligence approach, which is keeping the human in the loop, not replacing the human, then adoption becomes easier and the technology seems less threatening and the ramifications of it going wrong aren’t as high,” she said.
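The augmented-intelligence pattern Walch describes can be expressed very simply in code. The sketch below routes low-confidence model outputs to a human reviewer instead of acting on them automatically; the threshold, the review stub, and the example items are illustrative assumptions, not anything Cognilytica prescribes.

```python
# Minimal human-in-the-loop routing sketch, assuming a confidence cut-off.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off; tune per application

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def human_review(item: str) -> str:
    """Stand-in for a real review queue; a person would decide here."""
    return "needs-review"

def decide(item: str, model_label: str, model_confidence: float) -> Decision:
    """Act on the model only when it is confident; otherwise defer."""
    if model_confidence >= CONFIDENCE_THRESHOLD:
        return Decision(model_label, model_confidence, "model")
    # Below threshold, the human stays in the loop rather than being replaced.
    return Decision(human_review(item), model_confidence, "human")

print(decide("loan application #17", "approve", 0.97))  # model decides
print(decide("loan application #18", "approve", 0.61))  # routed to a human
```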

AI Ethics vs. Security

Walch was also interviewed about AI ethics. Asked if she foresaw greater regulation to prevent things like the spread of misinformation through social-media bots, she said she sees a crackdown coming in some form, but that it’s going to take time.

“I think that with any new and transformative technology we always say that you need to see how it plays out before you regulate it, because people who want to do bad things will just work around whatever regulations there are if you regulate it too early. Plus you need to see how people are using it,” she said.

In the announcement of the SAI report, Leadbeater was quoted as saying that discussions about AI security standards tend to take a backseat to the subject of AI ethics. However, he discussed the undeniable link between the two, with the need for privacy front and centre today.

“That seems to be the way the two have ended up gelled together of late… due to some of the early unfortunate malfunctions of AI systems,” he said. “You think of some of the chatbots and things that turned accidentally racist or similar, because they’ve not got the security right or they’ve not got the feedback loops.”
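Getting those feedback loops right often comes down to gating what is allowed back into the training corpus. The stub below is one hedged illustration of such a gate; the filter itself is a placeholder for a real moderation model or service, and the blocked-term set is purely hypothetical.

```python
# Sketch of a gated feedback loop: raw user input never enters the
# retraining corpus unvetted. The safety check is a stub.
from typing import Iterable, List

def looks_safe(message: str) -> bool:
    """Stub filter; a real system would call a moderation model or service."""
    blocked_terms = {"<blocked term>"}  # placeholder vocabulary
    return not any(term in message.lower() for term in blocked_terms)

def ingest_feedback(messages: Iterable[str], training_set: List[str]) -> None:
    """Add only vetted messages to the corpus the chatbot retrains on."""
    training_set.extend(m for m in messages if looks_safe(m))
```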

Leadbeater also referenced the tug of war between those responsible for securing AI and malicious actors as a bit of a feedback loop itself. Improving security will only drive attackers to up their game. Ironically, ethics can have a part in originating the vicious cycle.

Image courtesy of ETSI ISG Securing Artificial Intelligence

“Some of the blocking techniques are reasonably public because the browser vendors or others want transparency and ethics and would therefore […] provide some interaction or indication to the user as to how they work, AI or otherwise. The attackers obviously don’t have that particular moral requirement to need to do the same,” he said.

Ultimately, Leadbeater called security a hidden, lower layer that helps ensure ethics are maintained. The ethics aspect is something people can easily lock onto because it’s easier to understand, especially when things go wrong and you don’t get the expected results. However, security is just as important.

“In order to ensure an AI acts ethically and it cannot be manipulated, ultimately you have to have some form of assured trust underneath and therefore you come back to the security problem.”
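That “assured trust underneath” can take many concrete forms. One minimal, hypothetical example is refusing to load a model artifact whose hash does not match a trusted manifest; the file name and digest below are placeholders, not part of any ETSI specification.

```python
# Sketch of an integrity check before loading a model artifact.
import hashlib
from pathlib import Path

TRUSTED_DIGESTS = {
    # Published out-of-band, e.g. alongside a signed release.
    "loan_model.bin": "9f2c...e1a0",  # placeholder digest
}

def verify_model(path: str) -> bool:
    """Return True only if the artifact's SHA-256 matches the manifest."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    expected = TRUSTED_DIGESTS.get(Path(path).name)
    return expected is not None and digest == expected

if __name__ == "__main__":
    if not verify_model("loan_model.bin"):
        raise SystemExit("Model failed integrity check; refusing to load.")
```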

Feature image courtesy of Possessed Photography (via Unsplash).
