“Pretty much anything can be connected with a little bit of willpower, but should it be? And what are the kind of things that we need to think about when we are connecting things in our different spaces?”
This was the observation of Carmen Fontana, IEEE Member and VP of Operations at Augment Therapy, made during a recent panel discussion on the direction of future connectivity. It is one example of a growing mindset.
The ITU, the ATIS Next G Alliance, the European Commission and others have all placed the idea not just of secure systems but of trustworthy systems at the heart of their plans for the next generation of telecoms. For example, this comes from the ATIS Next G Alliance's 6G Roadmap document:
“Users and our societies expect a network which they can depend on and trust under all circumstances. This largely means a system that is reliable, resilient, and secures communication and information… Where relevant, data under control of the 6G network must moreover be used ethically, especially when processed by AI modules to serve applicable objectives.”
While security by design is a big ask in itself, “trust” may be a bigger one.
A Broad Question
Firstly, there is the definition of what “trust” is:
- In conversations about authentication and identity, a secure ID can be considered a “root of trust” to ensure that the device authenticating to a network is what it claims to be. However, trust in this sense has been a feature of telecoms for decades.
- In the quote above, we see trust for end-users involves data “being used ethically,” which is a common-sense understanding of a process that might build trust. It is also a fairly flexible term depending on whose ethics are being applied.
- A 2019 GSMA paper argued that transparency and simplicity of access were crucial to building consumer trust – in this case, so that operators could become trusted data hubs as a business.
This question about trust in future telecoms ties in with another recent piece of news – the call for a moratorium on general AI development published as an open letter by the Future of Life Institute on 22 March. To quote from the letter:
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening.”
The letter goes on to request the establishment of “a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts… This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
The letter, signed not just by academics but by the likes of Steve Wozniak, Elon Musk, the founders of Ripple, Pinterest and Skype, and figures from a wide variety of AI specialist companies such as DeepMind, is significant for a number of reasons.
The Decline of Trust
“One of the examples I always bring up is social media,” Fontana observed.
“When that hit the scene, it was amazing – rainbows and kittens – and we went full force as a society, only looking at the upsides without taking a pause to look at what could potentially go wrong,” she said.
“Now that we have the benefit of hindsight, we know mental health problems, disinformation and lots of other not-so-great things are tied up with social media. And so my hope is, as we do more things that have outsized impact, that we also take an outsized pause to figure out what potentially could go wrong.”
Here again we see a leader in technology – in Fontana’s case, using augmented reality to make physical therapy for people with complex needs more enjoyable, better managed and more likely to be completed – wanting to stand back and reflect. This is a far cry from the “move fast and break things” experience of the 2000s and 2010s.
Arguably, this ethos of moving ahead of regulation and putting faith in technology or "the market" has contributed to a lack of trust among the general public.
The collection and use of customer data, third-party tracking cookies, the awareness of an “attention economy” and dopa-mining algorithms encouraging anger and anxiety, the spread of disinformation in a “post-truth” environment, and much more have all undermined public faith in the ability or willingness of tech companies to act in the interests of the wider public.
As a result, people have attributed a wide variety of other harms to technology, from the belief that 5G caused COVID-19 or is destroying the bee population to the idea that Bill Gates is hiding microchips in vaccines. This suspicion of technology, and of the firms deploying it, needs addressing even as we come to rely on it ever more.
In essence, it seems, the public needs to be able to trust that the tools they use are just that – that they can reliably do what they are claimed to do, but are not doing anything beyond that “behind the scenes.”
Building Back Better
So what do trustworthy systems look like? And to what extent can the evolution of telecoms play a meaningful role in building public trust?
Telecoms has always been among the more heavily regulated industries. Although there have been high-profile data breaches and complaints about pricing or practices, the perception of telecom providers as utilities may be helpful here. For all that we might grumble about energy or water companies being greedy or incompetent, they are rarely seen as colluding to bring down society.
The real challenge, as Fontana pointed out, lies in the second-order effects of services or systems.
“I’d love to figure out how we can marry some of the good parts of AI to connectivity to be more forward-looking versus just reporting back data,” she mused.
“If a whole community has smart devices and you anonymise that data across the community you might see that, for instance, there’s an average increase in temperature across the community. Maybe there’s a flu outbreak and we should proactively position flu immunisations in the community or provide additional healthcare providers because we know something’s starting. But there are lots of implications.”
Some of those implications include the capability to de-anonymise data outright, or to do so functionally (I might not know your name, but I know where you go, what you do, who with, and the state of your health). What might be done with that beyond serving adverts?
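This kind of functional de-anonymisation is easier than it sounds. The sketch below, using entirely invented data, shows how nameless location traces can be re-linked to individuals simply because the combination of quasi-identifiers (home area, work area, device model) is unique:

```python
# Toy illustration of functional de-anonymisation: the "anonymised"
# records carry no names, but their quasi-identifiers are unique enough
# to re-link them to people. All data here is invented.

anonymised_traces = [
    {"home_cell": "A12", "work_cell": "B77", "device": "watch-gen3",
     "visits": ["clinic", "pharmacy", "gym"]},
    {"home_cell": "A12", "work_cell": "C04", "device": "phone-x",
     "visits": ["school", "supermarket"]},
]

# Side information an attacker might plausibly hold, e.g. scraped social
# profiles giving rough home/work areas and gadget ownership.
public_profiles = [
    {"name": "Alice", "home_cell": "A12", "work_cell": "B77", "device": "watch-gen3"},
    {"name": "Bob",   "home_cell": "A12", "work_cell": "C04", "device": "phone-x"},
]

def reidentify(traces, profiles):
    """Link each nameless trace to a named profile via quasi-identifiers."""
    linked = {}
    for trace in traces:
        matches = [p["name"] for p in profiles
                   if all(p[k] == trace[k]
                          for k in ("home_cell", "work_cell", "device"))]
        if len(matches) == 1:  # a unique match is a re-identification
            linked[matches[0]] = trace["visits"]
    return linked

print(reidentify(anonymised_traces, public_profiles))
```

With only three attributes, each "anonymous" trace matches exactly one person, and their clinic and pharmacy visits become attributable to them by name.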
Fontana gives another example. “Smartwatches are getting smarter and smarter. It’s not out of the question that someday they’ll be able to pre-diagnose conditions – which is great from a personal healthcare standpoint, but also worrying that it may disqualify you from getting health insurance or life insurance or employment, which should be illegal but it’s at best a grey area.”
These second-order effects might not be easy for people to predict if they are, for example, application developers rather than insurance experts. However, the impact that they can have on a society is very real.
There are at least two ways to tackle such challenges. One is to build technical methods that prevent unwanted second-order effects – in the example above, perhaps by aggregating or very strongly obscuring the data. Many issues, however, would need a regulatory approach rather than a technical one to manage or prevent.
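One concrete way to obscure data strongly is differential privacy: release only aggregates, with enough noise added that no individual reading can be inferred. The following is a minimal sketch (invented readings, simplified mechanism); a real deployment would need privacy-budget accounting across repeated queries:

```python
import random

def dp_average(values, lower, upper, epsilon):
    """Differentially private mean of sensor readings.

    Clamp each reading to [lower, upper] so that one person's influence
    on the mean is bounded, then add Laplace noise scaled to that bound.
    Illustrative sketch only, not a production mechanism.
    """
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / len(clamped)
    sensitivity = (upper - lower) / len(clamped)  # one reading's max effect
    scale = sensitivity / epsilon
    # Laplace(0, scale) noise as the difference of two exponential draws.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_mean + noise

# Hypothetical community wearable temperatures (degrees Celsius).
readings = [36.8, 37.1, 38.2, 36.9, 37.8, 38.0, 37.5, 36.7]
print(round(dp_average(readings, 35.0, 41.0, epsilon=1.0), 2))
```

The health service in Fontana's flu-outbreak example could still see the community's average temperature rising, but no single household's reading is recoverable from the published figure.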
In an environment of after-the-fact regulation, the penetration of AI and digital services into our lives gives harms growing scope to take hold far more rapidly than regulation can respond. The Future of Life Institute's call for a moratorium – to consider what safeguards should be in place before we proceed along potentially problematic routes – seems very relevant in this context.
Trust in Telecoms
Interesting as this is, how does it relate back to the telecoms industry? Increasingly, telecoms networks rely on machine learning and relatively simple AI to meet specific objectives – to optimise a radio interface, for example, or to orchestrate a service.
However, the complexity of networks is increasing rapidly, and on top of these simple systems, researchers are exploring how to optimise services end-to-end (which may involve degrading one network element’s performance for the benefit of the overall service).
Meanwhile, a push towards more distributed and low-latency services is encouraging the use of federated learning. In all of these, there may be scope for choices of training data or algorithmic biases to deliver unintended and problematic outcomes, simply because the designers cannot anticipate the multiplicity of situations in which the systems find themselves.
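Federated learning itself is simple to sketch. In the minimal federated-averaging example below (toy data and a one-parameter model, invented for illustration), each client trains on its own local data and only the updated weights leave the device; the server never sees raw data:

```python
# Minimal sketch of federated averaging (FedAvg) for a one-parameter
# model y = w * x, trained by gradient descent on mean squared error.
# Data, learning rate and round counts are invented for illustration.

def local_step(w, data, lr=0.01, epochs=20):
    """One client's local training; only the final weight is shared."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(w, client_datasets, rounds=10):
    """Server loop: broadcast w, collect locally trained weights, average."""
    for _ in range(rounds):
        local_weights = [local_step(w, d) for d in client_datasets]
        w = sum(local_weights) / len(local_weights)  # unweighted average
    return w

# Three clients whose local data all roughly follow y = 3x.
clients = [
    [(1.0, 3.1), (2.0, 6.0)],
    [(1.5, 4.4), (3.0, 9.1)],
    [(0.5, 1.6), (2.5, 7.4)],
]
print(round(fed_avg(0.0, clients), 2))  # converges near 3
```

Note the naive unweighted average in `fed_avg`: if one client's data is unrepresentative, or clients hold very different amounts of data, the shared model can drift in ways none of the designers intended – a small-scale version of exactly the bias problem described above.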
Is it impossible, for example, that a system might independently prioritise wealthier areas as a side-effect of other calculations, deepening digital divides?
More widely, telecom providers are looking for a greater role in digital services, such as the GSMA’s latest push to offer network APIs with Open Gateway. There may be an opportunity for telecoms players globally to position themselves as consumer champions by, for example, offering user verification and authorisation while stripping out identifiers. Or using high-frequency combined communication and sensing to ensure users’ safety without knowing who they are.
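What might identifier-stripping verification look like? The sketch below is hypothetical – the names and flow are invented and this is not the actual Open Gateway API – but it shows the principle: a service asks the operator an attribute question and receives only a yes/no plus a per-service pseudonym, never the subscriber's phone number:

```python
# Hypothetical sketch of identifier-stripping verification. A service asks
# "does this session belong to an age-verified subscriber?" and gets back
# a boolean plus a pseudonym. Names, records and flow are invented; this
# is not the real GSMA Open Gateway interface.
import hashlib
import hmac

OPERATOR_SECRET = b"demo-only-secret"  # held by the operator, never shared

SUBSCRIBERS = {  # operator-side records, keyed by the real identifier
    "+447700900001": {"age_verified": True},
    "+447700900002": {"age_verified": False},
}

def verify_session(msisdn, service_id):
    """Answer the attribute query with a pseudonym instead of the MSISDN."""
    record = SUBSCRIBERS.get(msisdn)
    if record is None:
        return None
    # Per-service pseudonym: stable within one service so it can manage
    # accounts, but unlinkable across different services.
    pseudonym = hmac.new(OPERATOR_SECRET,
                         f"{service_id}:{msisdn}".encode(),
                         hashlib.sha256).hexdigest()[:16]
    return {"age_verified": record["age_verified"], "user_ref": pseudonym}

result = verify_session("+447700900001", "health-app")
print(result["age_verified"], result["user_ref"])
```

The service learns the one fact it needs and a stable reference for the user, while the operator keeps the real identifier, positioning it as a privacy intermediary rather than a data leak.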
Oddly, though, there are some relatively simple ways to build trust with end-users: for example, by engaging actively with their problems and delivering solutions aimed specifically at them.
“There is still a large swathe of the population that don’t have reliable internet access,” Fontana pointed out.
“Those populations are also the ones least likely to have reliable transportation or they don’t have a hospital nearby or some of the other obstacles to quality care. So by enabling connectivity to them, we can do treatment in the home, we can do remote therapeutic monitoring, we can do things with wearables. There are a lot of health care interventions that can be unlocked by just having reliable access.”
Government visions of 6G societies have so far all included the idea of universal access to services, something which has always been both a technology and a cost problem. Technology solutions are coming closer to being realised as the industry works to integrate non-terrestrial networks, to operate in lower spectrum bands and to use mid-bands more effectively.
If commercial solutions can appear in parallel, then that puts the telecoms providers in a great position to work with services such as healthcare to make tangible differences to many lives, building trust in the process.
However, we should go back to the question of what we want networks, and telecoms, to be or do as the technology evolves. If, as an industry, we want to be the people connecting everything, then we will build trust as we build reliable connectivity. Equally, for other aims, we build trust by communicating in line with our actions. This is not just a PR activity but a matter of making sure that external comms reflect the actual commercial and technology strategy on the ground.
While it will take a lot to rebuild faith in 'tech' overall, building trust in those bringing about 6G is possible. For the rest, perhaps time to reflect on what we're doing, and why, is not a bad thing.
“Humans are really great at finding ways to do bad things,” Fontana summarised. “But at least we’re starting to think about it. Our radar is up, we’re taking consideration and trying to get at least some of the potential negative use cases contained.”
Alex Lawrence is Managing Editor at 6GWorld. His mission is to bring together stakeholders from across industries, countries and disciplines to make sure that, as technology evolves in the coming decade, it’s meeting the changing demands of society, government and business.
He has been involved as a professional nosy person in the telecoms sphere since 2004, with short detours through industrial O&M and marketing.
If you’d like to talk to Alex about your ideas or projects he’d love to hear from you. @animalawrence or email@example.com.