The USA’s Next G Alliance released a report by its Societal and Economic Needs (SEN) working group last week, titled “Beyond Speed: Promoting Social and Economic Opportunities through 6G and Beyond”.
It makes for interesting reading and, overall, offers a useful basic structure for other countries’ own roadmaps and thought processes. That said, it’s important to bear in mind that some of the elements are particularly North America-centric.
The paper is 51 pages long, outlining “a base inventory of social and economic issues, grouped into common outcomes: Digital Equity, Trust, Sustainability, Economic Growth, and Quality of Life.” The report addresses each in turn, exploring the key objectives, concerns and challenges, and possible ways forward.
Many of these are only partially germane to the industry players per se, but the report highlights where there are overlaps between the technology areas the telco community is used to and broader policy/regulatory concerns. It also highlights promising areas to follow in each area, though it avoids making too many specific demands for what should or must be in scope for the next generation of telecoms.
This article could go in many directions, but the SEN’s report stands as quite a good remedy to some of the issues skimmed over in the ITU’s Technology Trends document, particularly as regards the convoluted question of trust.
On this topic, the report notes “A wide range of applications will leverage Next G systems, such as telemedicine, autonomous vehicles, and smart cities, requiring a high level of trust to be adopted and used effectively. Additionally, Next G systems will likely be more complex and interconnected than previous generations of networks, increasing the potential for security and privacy breaches.”
Defining a Trustworthy 6G
There is no doubt that trust is important, both for service creation and adoption. This topic area is especially problematic, as the question of what trust actually is can end up leading to some quite abstract philosophical wrangling. To the Next G Alliance,
“Trust at the highest level is a measure of confidence that an entity will behave in an expected manner, despite the lack of ability to monitor or control the environment in which it operates.”
This is quite a neat summation, in many ways. The Next G Alliance authors define trust through three lenses: Security, Privacy and Resilience.
“By grounding its security, privacy, and transparency promises in technology, Next G will be viewed as trustworthy, not merely trusted out of necessity,” say the authors.
However, a ‘trustworthy 6G’ needs to be set in context.
During the occupation of Afghanistan, there are anecdotes of Taliban fighters destroying cell towers on the basis that they were ‘American’ devices, and therefore untrustworthy… and then contacting Roshan to complain about a lack of coverage. With the advent of 5G we have seen numerous conspiracy theories about the terrible things it was supposedly doing, from eliminating bees to spreading COVID, leading to protests against 5G deployment in several countries and even damage to 5G cell sites.
With 6G we will indeed need to consider trust in the way that the Next G Alliance specifies, through the lenses of privacy, security and resilience. This is a huge task in itself, which we will dig into further below.
However, we will also need to build a story that can create confidence that, whatever 6G is technically, it’s a tool that is being leveraged for the benefit of society. It will mean actively listening to people who feel threatened and lost by what has been happening. It will mean responding not just with patronising explanations of science, but with compassion for the underlying concerns and mindsets and providing narratives that can engage people.
For example, providing transparency on the aims for 6G and inviting public feedback on its use and abuse from early stages would help governments and industry bodies build trust… provided that the feedback is acted upon.
The Next G Alliance’s white paper shows how future communications might become worthy of trust, but we do also need to think about methods which will encourage people to extend that trust in the first place.
Nobody can accuse the report’s authors of a lack of ambition, and this forms a great response to concerns raised about pursuing a new generation with 2G security architecture underlying it. Many of the solutions highlighted are right at the cutting-edge today.
For example, the difference between today’s centralised networks and a projected ‘network of networks’ is given serious consideration in the paper.
“Next G connectivity is a multi-domain service that relies on resources distributed across multiple trust domains,” they note.
“Consequently, Next G systems may need to be considered as a Zero Trust Architecture (ZTA)”.
This cybersecurity strategy is sensible insofar as it does not assume that any given user or network element is trustworthy by default. It means that users and data sessions will need to authenticate at each network boundary: a good way to reduce the attack surface when crossing between domains, though one which risks increasing latency. In an environment like this, moving compute power closer to the end user is going to be critical… but then, that’s an integral part of a disaggregated network model.
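The core ZTA idea, that every request is re-verified at every boundary rather than trusted because of its network position, can be sketched in a few lines of Python. Everything below is an illustrative assumption rather than anything the report specifies: the token format, the HMAC scheme and the five-minute lifetime are invented for the sketch, and a real deployment would use public-key credentials rather than a shared symmetric secret.

```python
import hmac
import hashlib
import time

# Hypothetical secret held by the trust domain's verifier (illustrative only).
DOMAIN_KEY = b"example-domain-key"

def issue_token(user_id: str, key: bytes = DOMAIN_KEY) -> str:
    """Issue a short-lived token binding an identity to an issue time."""
    issued_at = str(int(time.time()))
    payload = f"{user_id}|{issued_at}"
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str, key: bytes = DOMAIN_KEY, max_age: int = 300) -> bool:
    """Re-verify at every boundary crossing: never assume a request is
    trusted just because it arrived from 'inside' the network."""
    try:
        user_id, issued_at, sig = token.rsplit("|", 2)
    except ValueError:
        return False
    payload = f"{user_id}|{issued_at}"
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch: reject, don't fall back to trust
    return (time.time() - int(issued_at)) <= max_age  # enforce freshness
```

The latency cost mentioned above shows up here: every hop that enforces `verify_token` adds a check that a perimeter-trust model would have skipped.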
The other side of this disaggregated strategy is identity. As the authors point out, a ZTA requires users to be “authenticated and authorized to access these networks with truly portable identities that can be verified and authenticated across these open networks.”
The report proposes Self-Sovereign Identity as a solution. This is “an emerging decentralized identity solution that enables users to keep their digital identities with personally identifiable data secure in storage of their choice and thus determine their distribution and processing.”
It’s a fascinating step, and a brave one given that we don’t currently have policy or regulatory structures in place for Self-Sovereign Identity. However, it is difficult to see other, more established technologies being effective in a decentralised environment either. If the telecoms industry is able to get behind Self-Sovereign Identity that may well give the incentive for more serious and rapid development not just of the technology but also the associated frameworks.
Related to this, the paper also explores the need to preserve privacy in machine learning systems. The tension between privacy and training algorithms on huge amounts of data – whether personally identifiable or usable in reconstructing an identity – has been palpable. Certainly the ‘oil rush’ for personal data in the 2010s seems problematic now. The Next G Alliance is looking at building in privacy-preserving techniques from the start, which (if people become aware of this) could be a big boost to personal trust in the telecoms providers.
The authors highlight Secure Multiparty Computation (SMC) as a possible method to enable privacy-preserving machine learning. They note that “SMC enables multiple parties to jointly compute a function over their private data without revealing their inputs to each other. This method could allow for the development of privacy-preserving ML models that can be trained on sensitive data without exposing that data to the model’s developers or other third parties.”
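To give a flavour of what SMC means in practice, the snippet below implements its simplest building block, additive secret sharing over a prime field: each party splits its private input into random shares, every party sums the shares it holds locally, and only the aggregate is ever reconstructed. This is a toy sketch of the principle, not the protocols the report envisages, which would also need secure channels and protection against malicious parties.

```python
import random

PRIME = 2**61 - 1  # field modulus; all arithmetic is done mod this prime

def share(secret: int, n_parties: int) -> list:
    """Split a secret into n additive shares that sum to it mod PRIME.
    Any n-1 shares together reveal nothing about the secret."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list) -> int:
    return sum(shares) % PRIME

# Each party holds a private input (e.g. a local count or gradient).
inputs = [12, 7, 30]
# Every party distributes shares of its input among all parties...
all_shares = [share(x, len(inputs)) for x in inputs]
# ...party j locally sums the j-th share of every input...
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
# ...and only the combined partials reveal the aggregate, never any one input.
assert reconstruct(partial_sums) == sum(inputs)  # 49
```

The point for privacy-preserving ML is that a model update can be computed from `partial_sums` without any party, including the model’s developer, ever seeing another party’s raw input.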
The report also highlights other areas to explore, such as the use of data trusts – which, like financial trusts, are empowered by the user to deploy their data in certain specified ways. The trust, a non-profit, is designed to act in the interests of the users.
“Data trusts have attracted attention from policymakers worldwide as a tool for allowing access to data while protecting citizens’ rights,” the report notes.
“Research in institutions such as the Open Data Institute and Global Partnership on Artificial Intelligence (GPAI) is ongoing, exploring how to operationalize a data trust, including the legal and public policy considerations.”
The radio side is equally ambitious. “Next G will represent an opportunity to deploy freshly matured implementations of long-contemplated security techniques,” the authors note.
“Such techniques range from transmitter fingerprinting (for authenticity) to information-theoretic security (that leverages unforgeable channel characteristics for private communications) to increasingly refined directional techniques that minimize intercept opportunity. In addition, these techniques will underpin the higher-level cryptographic techniques for application security.”
The authors also suggest the emergence of a security control plane to manage security information and deploy functionality flexibly.
All this is exciting reading. Let’s be clear, though, that there have been opportunities to deploy upgraded security techniques before which have been missed, whether due to inertia or simply a lack of clarity about the business case. Businesses have more at risk from poor security postures, and, crucially, more resources to seek reparations when things go wrong. If the telecoms world is going to pivot effectively to serve businesses across the globe, it will need to improve its security game.
Moreover, news stories such as self-driving cars stalling in areas of high network traffic highlight how, while people are fairly resilient and fault-tolerant, machines tend to be much less so. While the white paper is able to address privacy and security concerns in a far-reaching manner, resilience is a different matter. By its nature, resilience involves redundancy in case of failure, which is antithetical to efficiency. In the case of the vehicles above, perhaps the solution is to enable traffic prioritisation in some situations rather than insisting on net neutrality.
However, there is a difference between service resilience in areas with functioning (if congested) networks and the resilience of the networks themselves. Network operators feeling the pinch are more likely to share network resources between multiple providers than to build duplicate infrastructure.
That said, a properly managed network of networks may go a long way towards improved resilience. As things stand, phones can switch between Wi-Fi and cellular signals depending on what is available and accessible. Add to this roaming onto private networks, satellite and device-to-device mesh networking regardless of the owner, and the structure would in itself create much more resilience. However, it would be impossible to give guarantees based on what networks may or may not be available in any given situation.
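To make the idea concrete, a device-side policy for choosing among heterogeneous links might look like the sketch below. The link names, latencies and lowest-latency rule are invented for illustration; real multi-access steering (for instance, 3GPP’s ATSSS-style mechanisms) weighs many more factors, such as cost, policy and signal quality.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Link:
    name: str          # e.g. "wifi", "cellular", "satellite" (illustrative)
    available: bool    # is the link currently reachable?
    latency_ms: float  # measured round-trip latency

def select_link(links: List[Link]) -> Optional[Link]:
    """Naive failover policy: among reachable links, pick the
    lowest-latency one; return None if nothing is up."""
    candidates = [l for l in links if l.available]
    return min(candidates, key=lambda l: l.latency_ms, default=None)

links = [
    Link("wifi", available=False, latency_ms=10),
    Link("cellular", available=True, latency_ms=35),
    Link("satellite", available=True, latency_ms=600),
    Link("d2d_mesh", available=True, latency_ms=80),
]
# With Wi-Fi down, this policy falls back to cellular; if cellular also
# failed, satellite or mesh would keep the device connected at higher latency.
```

The guarantee problem mentioned above is visible even here: `select_link` can only choose among whatever happens to be reachable, so no service level can be promised in advance.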
As a result, network resilience may become the domain of regulators, in parallel with competition, to ensure multiple types of network are available in any given location.
Overall, the Next G Alliance’s white paper covers a huge amount of ground. Many of the societal and economic objectives the Alliance has outlined will resonate with other countries, but not all of them. However, it does include a significant amount of fresh thinking about some of the Next G ‘elephants in the room’ and puts forward bold proposals for possible ways to address them. While many of the proposals may seem quite radical, especially when it comes to redefining the security architecture for a new generation of telecoms, developing these new techniques is liable to be less costly in the long term than the alternative.
Alex Lawrence is Managing Editor at 6GWorld. His mission is to bring together stakeholders from across industries, countries and disciplines to make sure that, as technology evolves in the coming decade, it’s meeting the changing demands of society, government and business.
He has been involved as a professional nosy person in the telecoms sphere since 2004, with short detours through industrial O&M and marketing.
If you’d like to talk to Alex about your ideas or projects he’d love to hear from you. @animalawrence or email@example.com.