The OTFS Interview – Implications of a 6G Candidate Technology

Discussion around OTFS (Orthogonal Time Frequency Space) is picking up. For example, the IEEE recently launched a call for papers ahead of a workshop on “OTFS for 6G and Future High-Mobility Communications” next June. The workshop is being organised by a team of academics from across Australia, China, Italy, India, and the USA. 6GWorld caught up with Professor Ronny Hadani of the University of Texas at Austin to understand the significance and role of OTFS.

Hadani originally developed the OTFS modulation technique. He is also a co-founder of Cohere Technologies, a company mentioned later in the interview.

What exactly is OTFS, and what makes it significant at this time? 

OTFS is a waveform. When you want to send information, eventually you might want to send it over the air. The carrier is an electromagnetic pattern. For example, in Code Division Multiple Access (CDMA) the carrier was a pulse. You take your bit, put it on a pulse, the pulse propagates and carries the bit. In OFDM (Orthogonal Frequency-Division Multiplexing) it’s not a pulse, it’s a tone, a kind of waveform. Again, this propagates through the air and carries your bit.  

You have to understand that the shape of the wave is very important. It’s like the type of car that you choose to carry you. If you need to travel over a very flat road, you can use a racing car and it will carry you quickly and safely. If you need to go over mountains or rugged terrain, you should take a very different type of car that can endure all the bouncing.  

OFDM is a very good carrier over many types of “road”, but when you come to the demands of very high mobility, it turns out that both the pulses of CDMA and the tones of OFDM get into trouble.

Mostly, when a waveform goes through an environment, the environment distorts the waveform. What you get at the receiver may be a very distorted, twisted waveform that’s harder to interpret. This is the phenomenon we call fading.  

What makes the OTFS waveform unique is that it’s basically oblivious to distortion. It can go through the environment; the environment tries to distort it, but the waveform is shaped in such a way that no matter what the environment does to it the waveform doesn’t change its shape.  

Think about a sphere; it’s a unique shape. You can rotate it any way you want – this is the distortion – but it still comes out a sphere. It’s mathematically invariant to this type of operation. And that, in analogy, is what OTFS is – it’s the equivalent of a sphere in the context of these operations that the environment is trying to do to it, and in this analogy the receiver can pick it up through a round hole no matter what’s happened on the journey.

It’s like you’re sending each packet out in a little ball through the air; the environment can try to rotate it, but you still end up with a ball. If you use OFDM it’s like sending packets in a cube. If you rotate it, the receiver doesn’t get a square profile; it gets a diamond or a rectangle or so on.

It’s not a perfect analogy, but that, in summary, is what makes OTFS so adaptive to complicated communications. I think this is one reason why OTFS is so exciting right now.  

Only one reason? 

There are other aspects that make OTFS exciting, I think. One aspect is that, mathematically speaking, OTFS brings together two disparate aspects of signal processing. One is communications theory; the other is radar theory.  

OTFS is at the same time a waveform which allows you to do communication and a waveform which allows you to do state-of-the-art radar. Simultaneous communication and sensing. And when I say state-of-the-art I mean military-grade. It’s a radar that not only detects the range from me to you but also detects your velocity. I can send an OTFS signal in your direction and I can actually detect your pulse.  

This is also exciting because traditionally the radar community and telecoms community were very far apart, but 6G is bringing both communities to a convergence. That means it’s doubly useful in V2X (Vehicle-to-everything) environments, because cars will need to coordinate with one another and with their environment. You can perform sensing using television and visible light, but you can do it better using radar. Communicating at the same time is a big bonus for efficiency.  

Anything else? 

Really, I think the most exciting element is the scientific part. Over time you build up a certain way of thinking about problems, and eventually it saturates your ability to innovate: you’ve covered everything you can manage from that way of thinking. And we are at that point, by the way.

At the PHY [physical] level in 5G we’re still doing the same things as 4G, so people need a new way of thinking about this to further innovate. That’s what the OTFS paradigm allows you to do.  

The reason I say that is this: OTFS is a fundamental paradigm shift in terms of the mathematical structure for data sending and in our understanding of what radio communication is. This is exciting for academia because it gives you a new way to think about almost everything in the context of communications. 

The current paradigm is about frequency and time. When you think about time and frequency, it’s not like space. It’s not something you can chart on x and y axes. If you draw x and y axes in space, you can put your finger on a specific point. These are orthogonal to each other; you can travel in x and y any way you want with great specificity, but time and frequency do not behave like that. Time and frequency relate to each other in a more subtle manner.  

This is the uncertainty principle in action. If you want to be accurate in time, you become ambiguous about frequency – and vice versa: if you want to be precise about frequency you have to smear it across time. Mathematically it’s very peculiar – it’s a partial information plane. If time and frequency behaved like x and y, the whole theory of signal processing would be like high-school maths and much less complicated.
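The trade-off Hadani describes is easy to see numerically. The sketch below (plain NumPy; the sample counts and pulse widths are arbitrary, illustrative choices) measures the RMS spread of a Gaussian pulse in time and the RMS spread of its spectrum in frequency: however narrow or wide you make the pulse, the product of the two spreads stays pinned at the uncertainty bound 1/(4π).

```python
import numpy as np

def rms_width(x, p):
    """RMS width of a (not necessarily normalised) distribution p over x."""
    p = p / p.sum()
    mean = (x * p).sum()
    return np.sqrt(((x - mean) ** 2 * p).sum())

n, dt = 4096, 1e-3                           # samples and sample spacing (s)
t = (np.arange(n) - n // 2) * dt
f = np.fft.fftshift(np.fft.fftfreq(n, dt))   # matching frequency grid (Hz)

for sigma in (0.01, 0.05, 0.2):              # pulse widths in seconds
    g = np.exp(-t**2 / (2 * sigma**2))       # Gaussian pulse in time
    G = np.fft.fftshift(np.fft.fft(g))       # its spectrum
    st = rms_width(t, np.abs(g) ** 2)        # spread in time
    sf = rms_width(f, np.abs(G) ** 2)        # spread in frequency
    print(f"sigma={sigma}: st*sf = {st*sf:.4f} (bound 1/(4*pi) = {1/(4*np.pi):.4f})")
```

Narrowing the pulse in time widens its spectrum in exact proportion, which is the ambiguity the interview is pointing at; the Gaussian is the shape that meets the bound with equality.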

The fundamental mathematical theory behind OTFS is that time and frequency do behave like a plane, but in a more sophisticated way. As it turns out there is a certain transformation, a certain way to think about time and frequency exactly like x and y after a certain twist; it’s not important here exactly how, but that’s the new way of thinking that OTFS brings to the table. The thing I want to convey is that OTFS theory allows you to think about time and frequency the same way you think about x and y.  

That’s very powerful because now you can start to deal with time and frequency the way you deal with x and y. For example, you can build pulses that are localised along time and along frequency at the same time. It’s a bit more sophisticated, but it behaves exactly like a pulse in both dimensions.
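A minimal sketch of that idea, under one common textbook convention (the grid sizes, the QPSK symbols, and the use of plain FFTs are illustrative assumptions, not Cohere’s implementation): information symbols are placed on a delay-Doppler grid, an inverse symplectic finite Fourier transform (ISFFT) spreads each one across the whole time-frequency plane, and an ordinary OFDM-style modulator then produces the transmitted signal. Over an ideal channel the receiver simply inverts the two steps and recovers the grid exactly.

```python
import numpy as np

M, N = 16, 8   # delay bins (subcarriers) x Doppler bins (time symbols)

# QPSK symbols placed on the delay-Doppler grid (illustrative sizes).
rng = np.random.default_rng(0)
x_dd = (rng.choice([-1, 1], (M, N)) + 1j * rng.choice([-1, 1], (M, N))) / np.sqrt(2)

def isfft(x):
    """Inverse symplectic finite Fourier transform:
    delay-Doppler grid -> time-frequency grid (one common convention)."""
    return np.fft.fft(np.fft.ifft(x, axis=0), axis=1)

def sfft(x):
    """Forward SFFT: time-frequency grid -> delay-Doppler grid."""
    return np.fft.ifft(np.fft.fft(x, axis=0), axis=1)

x_tf = isfft(x_dd)              # each symbol now spans all of time & frequency
s = np.fft.ifft(x_tf, axis=0)   # OFDM-style modulator (Heisenberg transform)

# Receiver over an ideal (distortion-free) channel: undo both steps.
y_dd = sfft(np.fft.fft(s, axis=0))
print(np.allclose(y_dd, x_dd))  # True: the grid is recovered exactly
```

Because every symbol is smeared over all of time and all of frequency, each one experiences the same averaged channel – which is the “sphere” intuition from earlier in the interview.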

Why does the IEEE call for papers request proposals for OTFS in 6G and mobility? Does the community see 6G and high mobility being closely linked? 

5G was originally linked to high mobility, but I think that was pushed aside because 5G is essentially OFDM. There are higher layers where 5G brings a lot of flexibility, but on the PHY layer, the most fundamental layer, 5G is a disappointing generation. It didn’t really do anything – unlike what happened in 4G, 3G, and 2G. 2G shifted from analogue waveforms to digital waveforms; 3G produced CDMA; 4G produced OFDM; but 5G was stuck with OFDM. So there’s no innovation at the fundamental level, and mobility does require innovation at the fundamental level.

This is why, I think, people are linking 6G with extreme mobility. I think mobility’s a benchmark.  

Cohere has been promoting their delay-Doppler processing to improve cell capacity for the last few years, by enabling frequency re-use and targeting devices accurately. At the same time, any future multipoint-to-multipoint network using cloud RAN would also have to target accurately. How does OTFS relate to that?

I’m glad you asked, because this is a very important issue. You can argue that high mobility is simply a vertical use case. I think people understand that, in order to improve capacity for the whole network by the next order of magnitude, not just 10% or 50%, you need to coordinate base stations and break down the cellular model. 

But there is a fundamental physical problem in making multiple base stations behave like one base station. Once you need the base stations to coordinate with each other, you extend the latency of your system.  

Today, communicating between the device and one base station, the lag is a few hundred microseconds or milliseconds. In that time the environment around the device doesn’t really change – I’m talking here about microscopic changes relating to, for example, physical attenuation. In these very short timescales it’s pretty static. So you can make decisions at the base station that you can apply to the device a few milliseconds later and it will be the correct decision or a good approximation of it.  

When you go to a multi-base station situation you want them to coordinate so that the capacity every user gets is higher. For this coordination at the base stations, you turn a few hundred microseconds into dozens of milliseconds. Now, instead of half a millisecond you’re talking about maybe 50 milliseconds of lag. In that time the microscopic environment around the device can change significantly. So in order to provide effective coordination you need to be able to say how the environment will look 50 milliseconds into the future. You can’t assume that the environment will stay the same because for sure it’s going to change. You need to be able to predict.  

This is a fundamental problem; if you can’t predict 50 milliseconds into the future you cannot go to this paradigm at all. And this is the same issue with all the cloud services; if you can’t predict, it’s a fundamental obstruction. You can do cloud computing, but you can’t use it in the effective way you want [for cloud RAN or low-latency services].  

Then comes delay-Doppler. When you use a delay-Doppler transformation and coordinate system – which is the OTFS way of thinking about the environment and the waveform – it allows us to represent the environment in a way that is much more predictable. The fact that the environment looks unpredictable in frequency or unpredictable in time is a mirage. The environment is very predictable, but you need to view it using the correct coordinate system.
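That claim can be illustrated with a toy channel. In the sketch below (all numbers are hypothetical, and the delays and Doppler shifts are deliberately chosen to sit exactly on the grid so the taps don’t leak), two moving reflectors make the time-frequency response H(t, f) fluctuate across the whole plane, yet a 2D Fourier transform into delay-Doppler coordinates concentrates essentially all of the channel’s energy into just two constant taps – a sparse, slowly varying description that is far easier to predict.

```python
import numpy as np

N, M = 64, 64                    # time symbols x subcarriers
dt, df = 1e-3, 15e3              # symbol spacing (s), subcarrier spacing (Hz)

# Hypothetical two-path channel: (gain, delay in s, Doppler in Hz),
# placed exactly on the delay-Doppler grid for a leakage-free example.
paths = [(1.0, 2 / (M * df),  6 / (N * dt)),
         (0.6, 5 / (M * df), -16 / (N * dt))]

t = np.arange(N) * dt
f = np.arange(M) * df
H = np.zeros((N, M), complex)    # time-frequency channel response H(t, f)
for a, tau, nu in paths:
    H += a * np.exp(2j * np.pi * (nu * t[:, None] - tau * f[None, :]))

h_dd = np.fft.fft2(H) / (N * M)  # transform to a delay-Doppler-style grid
power = np.abs(h_dd) ** 2
top2 = np.sort(power.ravel())[-2:].sum() / power.sum()
print(f"energy in the 2 strongest delay-Doppler taps: {top2:.4f}")
```

H itself varies from subcarrier to subcarrier and symbol to symbol, but the two delay-Doppler taps – which encode each reflector’s range and velocity, the same quantities a radar measures – stay put, which is why prediction becomes tractable in these coordinates.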

This is using the whole OTFS paradigm, but not necessarily the waveform. The delay-Doppler paradigm brings predictability to the table in this context of collaborative networks – and that’s what Cohere is doing today, by the way. They’re taking this delay-Doppler part of OTFS, which can be applied to any standard or waveform, exactly in the context of prediction, and doing collaborative MU-MIMO (multi-user, multiple-input, multiple-output).

So it sounds like OTFS is a promising candidate for the next generation of networks.

I think OTFS should be part of the next standard, because it answers particular use cases in the best way. I think the next standard will be a multi-waveform standard, though – for example, Wi-Fi works very well in many use cases and OFDM is optimal for certain things. So OTFS will be part of an arsenal of tools for the networks to use in different environments.  

One of the big things about delay-Doppler today is that it doesn’t require any collaboration from the handset. Think about that for a second – it means it will work with 3G, 4G, and 5G. What will it mean to apply MU-MIMO technology to 4G? User equipment (UE) in 4G isn’t aware of MU-MIMO, but we’re able to put software in the base station and create an MU-MIMO network with UEs that don’t even know they’re part of it. You double the capacity of the system, but you don’t require anything from them other than to behave the way they’re used to.

Once you don’t require any collaboration from the UE, the technology is oblivious to the generation of the devices using it. Now you can have devices on LTE, 5G, OTFS, and so on, all oblivious to the fact that they are using a different waveform from the device next to them. And so the capacity goes up and up and up. This is why I think 6G should be a multi-waveform standard.

One implication of that is that devices can be sustained over a longer lifespan. It also enables 5G and 6G to work together.

It sounds like OTFS is reaching maturity. What does the timeline for further development look like? 

Everything you do using OFDM or CDMA will need to be adapted to use OTFS. There’s a long list of very exciting things people developed for OFDM that PhD students will adapt to the context of OTFS. The fusion of these technologies will happen very quickly in the next few years. This is the easy part.  

Then I think the more complex part will be to bring it closer to a real industrial use, to make it scalable, worth investing money in. It’s never about the technology; it’s about other things. In this case, I think it will depend on the question: “Is 5G living up to its promises?” If it is, then 5G will be around a long time. If not, then OTFS and 6G will be accelerated much, much faster. 



