A “Physical Layer Machine Learning Applications Want To See”?

November 14, 2022

Written by Alex Lawrence

Rob Calderbank of Duke University is one of those rare academics who has spent a long time in commercial telecoms environments. Today, alongside leading Duke’s interdisciplinary work on big data, he has recently published work on something he calls pulsones, which may contribute to a more efficient future air interface.

6GWorld caught up with him to find out more about this, and why a big data professor is worrying about radio.

“I’m lazy,” he smiled self-deprecatingly. “So when I do two different things I like to make them one thing so it takes me less time.”

Recently, Calderbank has been applying his decades of experience at AT&T Research and Bell Labs to another form of simplification.

“I’m a physical layer guy, and I think historically we have had a tendency to design a physical layer with all sorts of bells and whistles, and then to argue that it can do all sorts of things – in fact, it can do anything you want it to do! And with sufficient heroism that’s probably true, but I think it’s more useful to work backwards from the higher layers.”

As a result, Rob has been investigating not only how machine learning can be used to improve the air interface but also what kind of physical layer lends itself to machine learning as we currently understand it.

“If you were designing the wireless physical layer that machine learning applications wanted to see, what would it be? I would argue that pulsones and OTFS is actually that system,” he explained. “Drawing people who are expert in machine learning into thinking about wireless is really important at a time when it’s becoming more and more difficult to estimate channels.”

What is a pulsone?

In a recent lecture, Calderbank went back to the days of radar, describing waveforms sent out into the blue as questions and the returns as answers, with the ultimate objective of prediction. For radar, the questions are “Where are things and where are they going?”.

Different kinds of waveform are better at answering different questions: a pulse in the time domain is good for identifying delay in the return – that is, how far away an object is. Meanwhile, a pulse in the frequency domain can identify movement in an object by the way it Doppler-shifts the returning frequencies.

A pulsone, to Calderbank, is a pulse that resolves channel spread in both delay and Doppler – in other words, something very similar to a radar’s requirement. Up to now the telecoms environment has tended to focus on either time or frequency. This would be an opportunity to use both.

…and why?

“Every time we have a new wireless standard, it’s an opportunity to ask fundamental questions,” Calderbank pointed out.

In this case, the fundamental question is whether channel estimation is becoming outdated as a way of interpreting the information coming back to radio receivers. As we move up into higher frequencies, channel estimation is getting increasingly difficult.

“It’s becoming more and more complicated to figure out the channel, and the Dopplers are getting quite serious,” Calderbank noted.

“If you wanted to turn mmWave into something mobile, or with just a little bit of movement, then you are looking at something like three kilohertz. That’s a lot of Doppler! If you don’t have to estimate these channels, I think it’s a win.”

So what’s the alternative to channel estimation? Channel estimation is a process of building a model of the environment, then applying that model to the signals received. Until recently, that’s been the only way we could sensibly understand the signals coming back from the outside world.

However, with AI these days it’s possible to depend not on an a priori model but on developing predictions based on actual inputs – using a model-free approach.

Calderbank explained how this can work in the context of pulsones.

“You start off saying, ‘I’m interested in this channel. It has a certain delay spread and a certain Doppler spread, so I’m going to build pulsones inside my little fundamental period.’ My delay period – the width of the box – is going to capture the delay spread. So the delay spread’s going to be less than the delay period. And similarly for Doppler, the Doppler spread of the channel’s going to be less than my Doppler period; and so the channel lives inside that box.

“If you signal a frame of data with these pulsones and you think about the input-output response frame to frame, that’s really just a matrix – a matrix with certain symmetries that’s easy to learn.

“And that’s what AI algorithms like to see. The successful AI algorithms glom on to low-dimensional matrices and that’s what makes them successful. That’s why they’ve been able to revolutionise image processing, for example, and natural language processing.”
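Calderbank’s point about matrices with symmetries can be made concrete with a toy sketch. The snippet below is an illustration only: it simplifies the true delay-Doppler channel (a twisted convolution, which carries extra phase terms) down to a plain 2D circular convolution on an assumed 8×8 grid, and the three paths are invented. It builds the frame-to-frame input-output matrix from just those few path parameters – the low-dimensional structure he describes – and checks one of the symmetries: the channel commutes with delay-Doppler shifts.

```python
import numpy as np

# Hypothetical frame dimensions: M delay bins x N Doppler bins.
M, N = 8, 8

# A sparse, invented set of channel paths: (delay bin, Doppler bin, gain).
# Real channels have only a handful of such paths, so the big matrix below
# is determined by very few parameters.
paths = [(0, 0, 1.0), (2, 1, 0.5), (3, -1, 0.3)]

def dd_channel_matrix(paths, M, N):
    """Build the MN x MN input-output matrix of a delay-Doppler channel,
    modelled here (as a simplification) as a 2D circular convolution
    on the delay-Doppler grid."""
    H = np.zeros((M * N, M * N), dtype=complex)
    for l, k, g in paths:                 # delay shift l, Doppler shift k
        for m in range(M):
            for n in range(N):
                row = ((m + l) % M) * N + ((n + k) % N)
                col = m * N + n
                H[row, col] += g
    return H

H = dd_channel_matrix(paths, M, N)

# The symmetry: the channel commutes with pure delay-Doppler shifts,
# so shifting a frame before or after the channel gives the same result.
x = np.random.randn(M * N)
shift = dd_channel_matrix([(1, 0, 1.0)], M, N)   # a pure one-bin delay shift
assert np.allclose(H @ (shift @ x), shift @ (H @ x))
```

Because the matrix is pinned down by a handful of path parameters and respects these shift symmetries, a learner has far fewer degrees of freedom to pin down than the raw matrix size suggests.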

Is that a good thing, though? Ultimately, is training AI to understand the environment going to be more compute-intensive and create more overhead than channel estimation? Not surprisingly, Calderbank doesn’t think so.

“So we have these domains: the time domain, the frequency domain and this delay-Doppler domain. These three domains are like the three vertices of a triangle, and they’re all connected by transforms. These transforms all boil down to the Fourier transform in the end,” he explained.

“So it’s actually not really more complicated to operate in the delay-doppler domain because at a certain point it’s Fourier transforms all the way down.”
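To illustrate what “Fourier transforms all the way down” means in practice, here is a minimal sketch, assuming an 8×8 delay-Doppler grid: the move from the time domain to the delay-Doppler domain is the discrete Zak transform, which amounts to nothing more than a reshape plus an FFT along one axis.

```python
import numpy as np

M, N = 8, 8            # hypothetical delay and Doppler grid sizes
rng = np.random.default_rng(0)
x = rng.standard_normal(M * N) + 1j * rng.standard_normal(M * N)  # a time-domain frame

# Time -> frequency: one full-length FFT.
X_freq = np.fft.fft(x)

# Time -> delay-Doppler: the discrete Zak transform -- just a reshape
# followed by an FFT along the Doppler axis.
def zak(x, M, N):
    grid = x.reshape(N, M)           # row n holds samples x[n*M : (n+1)*M]
    return np.fft.fft(grid, axis=0)  # FFT across n gives the Doppler axis

def inverse_zak(Z, M, N):
    return np.fft.ifft(Z, axis=0).reshape(M * N)

Z = zak(x, M, N)
assert np.allclose(inverse_zak(Z, M, N), x)   # the transform is invertible
```

So moving between any two vertices of Calderbank’s triangle costs only FFTs, which is why operating in the delay-Doppler domain need not be more expensive than staying in time or frequency.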

Reading your environment

For those of you who’ve been paying attention, the talk of radar may seem vaguely familiar. As conversation about possible 6G spectrum use has progressed, discussion of sub-THz frequencies for joint sensing and communication has been rife. Pulsones able to interpret both distance and movement would, in effect, act as a radar, wouldn’t they?

“After I left AT&T, I worked for a little while with folk in Australia’s Defence Science and Technology Organisation who did phase-coded waveforms for radar. What are phase-coded waveforms? Well, they look very like pulsones and they’re modulated by tones.”

This might send some people into a panic – the mobile companies are tracking us with 6G radar! However, there could well be uses we can put this to which would be very helpful for a future environment full of connected items. For example, managing both communications and air-traffic control for drones, or collision avoidance between people and robots on a factory floor.

In many ways this is a necessity for 6G; recent events such as 6GSymposium, One6G Summit and NYU’s Brooklyn Summit have all seen a drumbeat of messaging that technology for its own sake is not enough; the business models and use cases need to be baked in and understood well in advance, and more than just theoretically.

“The way to unlock progress is to get carriers excited about what might be,” Calderbank agreed.

“I mean, in 4G the proposition ‘Spend however many billion dollars to move voice to a new economic model’ was just not a compelling argument. It has to be that you’re opening up new applications, new sources of opportunity.”

So there are, potentially, ways in which this could be an opportunity. However, it’s not unreasonable that people should be worried about a lack of security if pulsones are reading them like radar. On the other hand, there are compensating features to Calderbank’s ideas – which, as you’ll recall, include the concept that pulsones and OTFS together create a physical layer suitable for AI.

“One of the features of OTFS is making a picture of your propagation environment. When we communicate, we derive our shared impulse response and it’s almost like that’s a shared secret. And so I think that there’s all sorts of uses you can put this to,” he pointed out.

This is an intriguing suggestion, as that shared secret could act as a basis for encryption between devices. For devices moving in relation to each other, the secret would change over time, constantly refreshing the keys.
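As a hypothetical sketch of the idea (not Calderbank’s own scheme – the function, parameters and channel values below are all invented for illustration), two devices that measure nearly the same channel response could quantise it coarsely and hash the result into a shared symmetric key; the coarse quantisation absorbs small measurement differences between the two ends.

```python
import hashlib
import numpy as np

def channel_key(channel_gains, bits_per_gain=2):
    """Derive a symmetric key from coarsely quantised channel magnitudes.
    `channel_gains` is a hypothetical vector of measured path gains."""
    levels = 2 ** bits_per_gain
    mags = np.abs(np.asarray(channel_gains, dtype=float))
    # Coarse quantisation: both ends get identical keys as long as their
    # measurements fall in the same bins despite small noise.
    q = np.clip((mags / (mags.max() + 1e-12) * levels).astype(int), 0, levels - 1)
    return hashlib.sha256(q.tobytes()).hexdigest()

# One device's view of the channel, and the other's slightly noisy copy.
true_channel = np.array([1.0, 0.45, 0.31, 0.08])
noisy_copy = true_channel + np.array([0.001, -0.002, 0.001, 0.0005])

# Reciprocity plus quantisation: both ends derive the same key.
assert channel_key(true_channel) == channel_key(noisy_copy)
```

An eavesdropper at a different location sees a different propagation environment, so – in principle – it cannot reproduce the quantised values that seed the key.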

At the same time, pulsones tracking location and movement could help with identity – if a device which has been moving predictably suddenly appears to teleport to a new location, that could indicate an attempted man-in-the-middle or spoofing attack.

Methods of managing authentication and security like this are something which mobile infrastructure providers and operators would be uniquely able to support, opening up new business models and services.

Arguably, this speaks to one of the greater ongoing challenges in the telecoms environment – converting technical capabilities into services and revenue. This isn’t getting any easier, as Calderbank emphasised. 

“One of the problems with telecommunications layering out is that it becomes harder for physical layer guys to think about how to make carriers successful. They’re not the same company any more but somehow, in order to create a big new industry, they have to do that.”
