The metaverse – and elements of 6G – may be here sooner than many think.
While the Sunday before MWC is a busy time for people preparing exhibition stands, it was also an opportunity to steal a march on the ideas cropping up. As well as Nokia’s announcements, there was more behind the scenes that points in promising directions.
To 6G… in 2024?
While 6G hasn’t yet been properly defined in standards, let us refer back to the requirements set out in the NGMN’s recent white paper on 6G requirements and design considerations. That emphasised the need for elements of a software-based 6G to be introduced gradually, as they became ready and in demand, on top of existing software-based networks. This concept of 6G is not a rip-and-replace but a gradual drip-feed of software improvements that displace legacy elements over time: a “drip-and-displace” model, if you like. This will make the boundaries between 5G, 5G-Advanced and 6G extremely porous.
To get to a cloud-native, software-based, automated and open 6G there needs to be an incentive to make the leap in investment from legacy single-vendor systems. For 5G sceptics, however, those investments have yet to bear significant fruit. As a result it was refreshing to hear from Aglocell CEO Bruce Peterson, a serial telecoms entrepreneur. Aglocell has developed rApps – applications for the non-real-time RIC in Open RAN – which offer just such an incentive. While the rApps aim to support 5G site configurations, they also link back to proprietary 4G equipment to drive up performance across that network, especially for the poorest-served. They have so far been able to demonstrate an average improvement of 11% in download capacity with a US carrier.
This is a striking example of 5G becoming porous back to 4G. While spending money on open 5G systems may not be paying its way yet, being able to boost 4G without any spend on infrastructure offers a useful method to offset the cost.
In another conversation, Deepsig provided food for thought. The AI developer has been making a name for itself by delivering native machine learning systems to improve PHY layer performance in vRAN. Their aspirations go far beyond that, though.
“We have actually been running a completely AI-native 5G system in our lab since 2021,” CEO Jim Shea pointed out.
This exemplifies an outlook in their product development which is strikingly different from what the telecoms community is used to. “As soon as we have something we think works, I encourage the team to put it in a live network and try it out. If we’re wrong we can take it down, figure out what didn’t work, and try it again.”
Such an approach is miles away from legacy R&D processes which seek to identify and solve problems in advance – not surprisingly, as the cost involved in creating combinations of hardware and software is exorbitant. “They couldn’t afford to get it wrong,” as Shea notes. However, there’s only so much that people can pre-empt. Rather than creating test networks that emulate real ones reasonably well, Deepsig can use live networks in all their varied glory to test and train its AI.
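The trial-and-rollback loop Shea describes can be sketched in a few lines of Python. This is a minimal illustration only; the function and helper names here (`trial_candidate`, `deploy`, `monitor`, `roll_back`) are hypothetical stand-ins, not DeepSig’s actual tooling.

```python
# Minimal sketch of a live-network trial loop, assuming hypothetical
# helpers for deployment, KPI monitoring and rollback are supplied.

def trial_candidate(candidate, baseline_kpi, deploy, monitor, roll_back):
    """Deploy a candidate component, watch a KPI, roll back on regression."""
    deploy(candidate)                 # push the candidate into the live network
    observed = monitor(candidate)     # e.g. average download throughput
    if observed < baseline_kpi:       # regression: take it down and diagnose
        roll_back(candidate)
        return ("rolled_back", observed)
    return ("kept", observed)

# Usage with stub functions standing in for real network operations:
result = trial_candidate(
    candidate="phy-model-v2",         # hypothetical component name
    baseline_kpi=100.0,
    deploy=lambda c: None,
    monitor=lambda c: 111.0,          # observed uplift beats the baseline
    roll_back=lambda c: None,
)
# → ("kept", 111.0)
```

The point of the pattern is that the cost of being wrong is a rollback, not a write-off – which is what makes trialling in live networks viable in a way it never was for tightly coupled hardware and software.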
This approach is getting attention from test and measurement specialists, unsurprisingly. Shea is coy about exactly how the discussions are progressing, but the tone seems positive.
Critically, however, what we are seeing here is almost exactly what the NGMN is requesting – the ability to put new elements into networks on an ad hoc basis. The approach from Deepsig shows that alongside that ability comes the flexibility to add, trial and, if necessary, remove elements on the fly with minimal disruption or up-front cost.
The implications of this for network evolution are many, but – if the operators are able to adapt their purchasing and processes – it should give them the opportunity to become much more engaged with the development of tools and services for their networks, de-risking investments thanks to fast iterations of trials and improvements.
It also means that, to all intents and purposes, we might already be seeing the first drips of what will ultimately become 6G filtering into our networks.
At a Sunday preview of a variety of start-ups, it was clear that there are many technical elements falling into place for the metaverse. It’s also clear that the concept of people shutting themselves off from the physical world in preference for an entirely virtual one is not popular. While it may have appeal from time to time for gaming and escapism, much of the attention remains focused on improving interaction with, and understanding of, the world around us.
Dedicated readers may recall this interview with Capgemini Engineering which suggested that the first steps in the evolution of a metaverse may emerge from digital twins. AR platform Fectar seems to be the embodiment of such activity. CEO Eugene Kuipers was able to demonstrate just such a digital twin, showing how data readouts relate to activity in specific parts of a turbine. While the demo was for training purposes, the commercial version actually works as the digital twin of real-life turbines, enabling remote monitoring and interaction.
Strikingly, the system also allows for third parties to interact and collaborate around just such a digital twin; good for training, as mentioned above, but also for meeting or, in other contexts, shopping remotely.
Elsewhere Rodolphe Soulard was keen to showcase earbuds that translate instantly between languages. These use a connection to a cloud-based AI to translate incoming speech into the wearer’s native language; the system currently supports 34 languages. While originally designed to assist on international business calls and to make tourists comfortable in hotels, a UK pilot is taking the innovation to help young asylum seekers who arrive in the country with no knowledge of English.
A third demo, by Japanese firm Toraru, gives some examples of how remote presence is developing. They have essentially gamified it: using a common games keypad, a user can log in to a remote robot – or even a human representative – who acts as their ‘eyes’ and proxy in another country. By using a familiar console interface the company aims to overcome the barriers set by language, enabling people to engage with surroundings they cannot physically access.
Although the demo was designed to showcase a tourism scenario, it isn’t hard to imagine this becoming a lifeline for people who have difficulty with mobility for more mundane tasks such as shopping, enabling them to see what is actually in stock in a local shop and make decisions on what to buy in the same way they would if they were in-store.
While each of these companies’ demos is interesting in itself and has some ingenious applications, it is no stretch of the imagination to see how services like this might one day interoperate to create a much richer set of ways to engage with the world around us, overlaying onto our everyday experiences.
There are, of course, many hurdles to be overcome in bringing these and other concepts together; most obviously the creation of an underlying platform or ecosystem that allows such services to interoperate, and some form of policy or regulation to prevent the more obvious potential abuses. Nevertheless, a growing number of the technologies needed to build a rich, engaging metaverse are out there today.
Alex Lawrence is Managing Editor at 6GWorld. His mission is to bring together stakeholders from across industries, countries and disciplines to make sure that, as technology evolves in the coming decade, it’s meeting the changing demands of society, government and business.
He has been involved as a professional nosy person in the telecoms sphere since 2004, with short detours through industrial O&M and marketing.
If you’d like to talk to Alex about your ideas or projects he’d love to hear from you. @animalawrence or firstname.lastname@example.org.