MEC and Disaggregation To Reshape the 2030 Telecoms Landscape

January 26, 2022

Written by Alex Lawrence

“In ten years, we’ll see a complete reshuffling, not only of the service provider landscape, but also the vendor landscape. I just think it’s amazing – where we are today is just very exciting.”

David Stokes’ enthusiasm was clear. Ribbon’s Senior Manager for Solutions Marketing was talking to 6GWorld about how he sees the telecoms industry evolving, with changes in business models closely paired with changes in technology.

“We’re moving towards a world – and we’re nowhere near it yet – where we’re going to have to seamlessly be able to connect everywhere, every time,” Stokes observed, “and we want to do this in an environmentally friendly way.”

The combination of those two goals, Stokes believes, drives the demand for an evolution beyond 5G.

“5G is the starting point for connectivity everywhere, every time. With such massive goals you start to roll out and you see ‘Oh, actually there are gaps here. We need that really high-speed air interface, and we need this, we need that and the other.’ You’ll see the holes in 5G, and 6G will be the one that really gives that fully-connected every time, every place feature, in my opinion.”

MECanised Transport

Reducing the environmental impact of telecoms networks at the same time is a huge undertaking. “We need the ability to switch off bits of network completely which aren’t being used. It’s much like smart cars these days have got auto stop-start on the engines,” Stokes explained.

The NGMN already has a programme working on this approach, collaborating with vendors to make their RAN hardware capable of much more granular monitoring and control of energy usage. However, it will also demand third-party monitoring and control systems. As big a challenge, in Stokes’ eyes, will be levelling up overall network utilisation towards 80% across 24 hours, not just at peak time. One part of the solution rests in the transport network.
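The "auto stop-start" analogy can be made concrete with a small control loop. The sketch below is purely illustrative; the class names, thresholds, and the idea of toggling whole cells are assumptions for the sake of the example, not any real vendor or NGMN API.

```python
from dataclasses import dataclass

# Illustrative "auto stop-start" controller: cells whose utilisation stays
# below a floor are powered down, and sleeping cells whose local demand
# rises are woken again. All names and thresholds here are hypothetical.

@dataclass
class Cell:
    cell_id: str
    utilisation: float          # observed demand as a fraction of capacity
    powered_on: bool = True

class EnergyController:
    def __init__(self, sleep_threshold: float = 0.05,
                 wake_threshold: float = 0.6):
        self.sleep_threshold = sleep_threshold
        self.wake_threshold = wake_threshold

    def tick(self, cells: list[Cell]) -> None:
        """One control-loop pass over the monitored cells."""
        for cell in cells:
            if cell.powered_on and cell.utilisation < self.sleep_threshold:
                # Idle cell: switch off, neighbours absorb residual traffic.
                cell.powered_on = False
            elif not cell.powered_on and cell.utilisation >= self.wake_threshold:
                # Demand has returned: bring the cell back up.
                cell.powered_on = True
```

The granular monitoring the NGMN programme targets is what would feed the `utilisation` figure here; without per-component telemetry, a loop like this has nothing to act on.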

“You need one homogeneous transport network and then you should just bolt the access pipes onto the side of that, whether the access pipe is 5G, 6G, 4G, Wi-Fi or PON,” he noted. “If you have the same transport network, then pulling all of that traffic together and pushing it back to wherever it goes makes it much easier to achieve that seamless connectivity.”

The result would be something that combines smart energy management with the current focus on Multi-Access Edge Computing (MEC).

“What used to sit in the core of the network is now being distributed closer to the edge for reasons of latency and performance. So that transport network needs to be highly dynamic. It needs to be able to move where it’s needed, when it’s needed, and then turned off again when it’s not being used,” Stokes explained.

Disaggregation

While it is easy to find people willing to talk about the disaggregation of hardware and software in order to allow software’s evolution, Stokes brings a different perspective. The debate about how and when software should be containerised and composed using standardized APIs is still not entirely settled, but Stokes feels that something similar needs to happen to network hardware. The underlying reason for this is cost efficiency.

“The network that goes into the ground has to be upgradable for as long as possible. You want that box to last 15 or 20 years in reality, not five years. Also, when the hardware does evolve you don’t want to be locked into your current vendor.”

The approach would require changes in hardware design every bit as drastic as those demanded by fine-grained control over energy usage.

“One fully disaggregated approach would have hardware coming from an original device manufacturer, which is state of the art and will be lower-cost because it’s driven by volume,” Stokes explained. “Then you’re going to the best silicon vendors to put in the silicon that rides on that.”

While the move towards software disaggregation and openness has been under way for years now, hardware is at a much earlier stage. The combination of business and technical changes that full hardware disaggregation will demand, in parallel with software disaggregation, is starting now but will take a long time to work through.

“I think it will take close to twenty years before we get that sort of fully disaggregated software and disaggregated hardware.”

Service Management and Orchestration

While hardware disaggregation will be an important issue for infrastructure management companies, many service providers are moving away from owning physical network assets. As a result, the business priorities and basis of competition are likely to change drastically in the coming decade.

“I think the infrastructure in the ground will be a lot less important going forward,” said Stokes. “The ability to manage the services on the network – to put in the right services, the capabilities in the network to adjust those services at the right time – will become more and more important. The network will become truly service driven, not network driven.”

The result for competition will be striking. Historically, service providers have competed on the coverage and quality of their networks. With greater proportions of networks being owned by third parties such as infrastructure providers, enterprises and more, “telcos” will need to find new areas to compete in – and while price competition has historically been a straightforward option, it also damages profitability. Instead, over time, Stokes suggests something very different.

“Orchestration will become the new differentiator. If you can orchestrate this disaggregated, MEC-using network, you can offer so much value – CAPEX reduction, service differentiation, etcetera. If you can orchestrate it even at a basic level, you’re light years past your competitors, both on the vendor side and on the service provider side.

“Then if you are able to innovate software onto that orchestration engine more rapidly, that becomes a huge differentiator.”

If that seems like an exciting prospect for being able to innovate and iterate new services and propositions, it should. But, Stokes noted, it is also “utterly terrifying.”

“Actually doing this is unbelievably complex. It’s incredibly complex, and to manage and control and bill for this? Oh my God!”

This is in part because orchestration of services needs to take a two-speed approach.

“You’ve got to have a management system which is static and reliable – you know the data is secure, you know the actual delivery over whichever network is reliable, etcetera,” Stokes explained. “You also need a bit which is able to evolve rapidly – to try stuff, play, integrate quickly.

“So you have to build a whole CI/CD [Continuous Integration/Continuous Deployment] layer on top of that relatively static orchestration, one which allows you to rapidly evolve your interfaces to customer service management, new service systems, or whoever – to integrate with them and put them into the network rapidly.”
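The two-speed split Stokes describes can be sketched as a small, stable orchestration core plus a registry of fast-moving service adapters that the CI/CD layer deploys and replaces. Everything below is a hypothetical illustration; none of the names come from Ribbon or any real orchestrator.

```python
from typing import Callable

# Hypothetical two-speed orchestration sketch: the core stays static and
# auditable, while service adapters (the rapidly evolving CI/CD-driven
# layer) are registered and swapped without touching the core.

class OrchestrationCore:
    """The slow, reliable half: owns delivery and keeps an audit trail."""

    def __init__(self) -> None:
        self._adapters: dict[str, Callable[[dict], dict]] = {}
        self.audit_log: list[str] = []

    def register_adapter(self, name: str,
                         handler: Callable[[dict], dict]) -> None:
        # New integrations land here via the CI/CD pipeline.
        self._adapters[name] = handler

    def provision(self, service: str, request: dict) -> dict:
        self.audit_log.append(f"provision:{service}")
        return self._adapters[service](request)

# The fast half: an adapter like this can be rolled out or rolled back daily.
def sliced_backhaul(request: dict) -> dict:
    return {"service": "sliced-backhaul",
            "bandwidth_mbps": request.get("bandwidth_mbps", 100)}

core = OrchestrationCore()
core.register_adapter("sliced-backhaul", sliced_backhaul)
result = core.provision("sliced-backhaul", {"bandwidth_mbps": 500})
```

The point of the split is that the audit trail and delivery logic never churn, while the adapter surface can evolve at whatever pace customer integrations demand.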

Partly this allows for the service providers to innovate at speed, but it is also driven by the hyperscalers’ service agility.

“Facebook and Google and Netflix aren’t going away, so you need to be able to integrate with them and offer your end customers all the things that these guys can do, plus your stuff – and that needs the ability to tack as quickly as these guys are doing. So you need that rapid-response, CI/CD-driven software built into the orchestration as well.”

Breaking Down Network Vendor Lock-Ins… Setting Up New Ones?

Orchestration is clearly an element where opportunities for service innovation, efficiency and collaboration exist, to a degree that might determine service providers’ commercial success or failure. Is there a risk, then, that the success of future telecoms service providers becomes tied up with the developers of those orchestrators? Not necessarily from a technology perspective, Stokes argued.

“The orchestration also needs to be highly containerised. You can then take the IP-MPLS management from Ribbon, the optical from Nokia and so on.”

This is something which has met with resistance from major providers so far, unsurprisingly.

“They used to pretend they’re containerised, but then use calls which meant you could never get the benefits of being containerised. I think it absolutely is going to have to go that way, so you can actually plug and play the best software parts.”
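Genuinely plug-and-play parts imply that every domain controller sits behind one shared interface, so the orchestrator never sees vendor specifics. A minimal sketch of that idea, with entirely made-up class and vendor names:

```python
from abc import ABC, abstractmethod

# Illustrative plug-and-play sketch: each network domain (IP-MPLS, optical,
# ...) is driven through a single common interface, so controllers from
# different vendors can be mixed. All names here are hypothetical.

class DomainController(ABC):
    @abstractmethod
    def create_path(self, src: str, dst: str) -> str:
        """Provision a path through this domain and return its identifier."""

class VendorAMplsController(DomainController):
    def create_path(self, src: str, dst: str) -> str:
        return f"mpls-lsp:{src}->{dst}"

class VendorBOpticalController(DomainController):
    def create_path(self, src: str, dst: str) -> str:
        return f"och-trail:{src}->{dst}"

# The orchestrator holds one controller per domain, keyed only by role.
controllers: dict[str, DomainController] = {
    "ip-mpls": VendorAMplsController(),
    "optical": VendorBOpticalController(),
}
path = controllers["ip-mpls"].create_path("edge-1", "core-3")
```

The "pretend containerisation" Stokes criticises is, in these terms, a controller that nominally implements the interface but leans on private calls outside it; swap-ability only holds if the shared interface really is the whole contract.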

This technical perspective is not the only one that needs to be addressed, though. With greater complexity comes a greater suite of possible problems and more challenges in identifying them.

While vendor lock-in has been a very real problem, it also provided one contact to turn to in case of issues and one contract to manage when SLAs haven’t been met – the proverbial “one throat to choke” when things go wrong.

Stokes believes that this role may end up being taken by systems integrators. “There will be a big role for integrators, whether that be vendor integrators or independent integrators,” he observed.

However, the processes and roles will certainly be far from the relatively simple relationships with major NEPs in the past and will require a good deal of negotiation. A parallel is already playing out in the development of Open RAN in 5G, which analysts such as CCS Insight have suggested is unlikely to become mainstream until 6G, as Stokes pointed out.

“It is really a massive paradigm shift compared to the way telcos have been used to doing things, which is why in reality it might not be an issue with the 5G RAN itself. It’s the whole infrastructure around it. It’s going to take at least a decade to get to the point where we could fully realise what was meant to be 5G, for no fault with the 5G network itself.”

While there is a huge amount of work to be done, Stokes is optimistic.

“Every challenge is also an opportunity, and this is a huge challenge. This is by far the most exciting time to be in this industry. It’s not just people playing with bits and bytes in the network, it really has the potential to be life changing and society changing.”
