Great video! For RISs, it seems to be "an exponential effort for incremental gain". It may be possible, but it may never really be worth the trouble except in special cases, it would seem. Cheers!
Yes, I think this is a fair assessment of the status quo, which focuses very much on boosting the received signal power. But the conclusion might change if we identify a truly groundbreaking use case for the technology. RIS is a big hammer looking for a suitable nail to hit. There are many exciting research problems from an academic perspective, even if the technology might not be mature enough for 6G. We wrote about this in the magazine paper "Reconfigurable Intelligent Surfaces: Three Myths and Two Critical Questions" arxiv.org/pdf/2006.03377.pdf
Hi Prof Emil and Prof Larsson, another brilliant talk indeed.
Could you please comment on the role of antenna design towards 6G? Usually, when we talk about 5G antennas, antenna designers just focus on the "frequency bands", while the performance and design considerations remain the same as those of conventional antennas.
As 5G is much more about the experience and services (and not about the frequency band only), and the same goes for 6G, how should an "Antenna Design Engineer" target 5G and 6G besides focusing merely on frequency bands (either mmWave or THz antenna designs)?
Moreover, if we talk about URLLC or mURLLC for 5G and 6G, then by staying on the PHY layer, how can an antenna designer ensure "reliability" and "latency" from the antenna point of view alone (without looking at the higher OSI-layer parameters that dictate URLLC performance)? Is it all about dynamic radiation-pattern reconfigurability and beam steerability of antennas, or do we also have something more than this for 6G?
Just for my understanding: for an RIS, you need to provide it with power and, additionally, you need sensors to measure the propagation environment plus a communication link to steer the RIS, right? Thank you.
Yes, you need power and a communication link. Sensors are not necessary in the RIS; instead, one can let it change its configuration over time, make measurements at the receiver side, compute a preferred configuration, and then feed it back to the RIS using the communication link.
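To make this concrete, here is a minimal numpy sketch of such a sensor-free feedback loop, under the simplifying assumption that each of the N RIS elements applies a phase shift to a cascaded channel. The channel model, the random-search strategy, and all variable names are illustrative assumptions, not a real protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed model: the received signal through an N-element RIS is
# y = sum_n h_n * exp(j*theta_n) * g_n, where h (TX->RIS) and g (RIS->RX)
# are unknown to the RIS controller.
N = 64
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def received_power(theta):
    """Power measured at the receiver for a given RIS phase configuration."""
    return np.abs(np.sum(h * np.exp(1j * theta) * g)) ** 2

# Feedback loop: try candidate configurations, measure at the receiver,
# and feed the best one back to the RIS over the control link.
best_theta, best_power = None, -np.inf
for _ in range(200):
    theta = rng.uniform(0, 2 * np.pi, N)   # candidate configuration
    p = received_power(theta)              # measurement at the receiver side
    if p > best_power:
        best_theta, best_power = theta, p

# Upper bound for comparison: co-phasing all paths (requires full CSI).
ideal_power = received_power(-np.angle(h * g))
print(f"random search: {best_power:.1f}, ideal co-phasing: {ideal_power:.1f}")
```

In practice, smarter probing (e.g., structured codebooks) converges much faster than random search, but the measure-and-feed-back principle is the same.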
Dear Professors, can ultra-dense networks or heterogeneous multi-band networks be expected among the future 6G technologies? I would really like to know your opinion on this.
This is mostly a commercial question: what is economically feasible to deploy and what do the operator's customers request? One can build ultra-dense networks already today, but it hasn't happened yet since other solutions are sufficient. It is the same thing with mmWave; 5G in the 3 GHz band is sufficient so far.
A main issue with ultra-dense networks is interference between cells. The research that deals with this is called cell-free massive MIMO or distributed MIMO. This is a popular 6G research topic.
Regarding full duplex, why does it have to be on the same frequency? Why not have a smaller sub-channel used for the uplink (in most cases uplink traffic is much less than downlink) and do the Tx/Rx via FDD? I noticed that in WiFi we have devices that will do 2.4/5 GHz and now 6 GHz: three bands, but even with WiFi 7 there is still no full duplex. It seems it would be helpful to have even a smaller channel on one band for uplink while the other band handles downlink. Because of the large frequency separation, it wouldn't require as much technology to make it work. Why do you think they aren't going that route? Is there a technical limitation, or is it not worth doing because TDD switching is so fast that they figure there aren't many benefits? Thoughts?
The definition of full duplex is that the same time-frequency resources are used for both uplink and downlink, so it has to be the same frequency at the same time to have full duplex. But it is a fair point: Do we need it? Will the potential gains vanish in practice due to technical difficulties (self-interference) and traffic asymmetry (more downlink than uplink)? I believe what you describe is FDD, where the bandwidths of the uplink and downlink are fixed by the license. TDD is more flexible in terms of changing what fraction of time is used for uplink and downlink. If I remember correctly, 75% or 66% downlink traffic is what is being considered in 5G.
Hello Prof. Emil and Prof. Erik. Thank you for sharing your discussion. It is indeed very informative.
I have a query regarding semantic communications. I agree with your point that semantic communication is more or less a more sophisticated combination of source and channel coding. If convincing use cases of this technology emerge in the future, do you suppose that existing source and channel coding algorithms can be used (with slight tweaks), or will absolutely new techniques be required? After all, this technology will be used for communication!
I think the normal approach is to do source coding at the application layer and resort to compression standards such as MPEG-4 for video. But suppose we don't want to watch the movie but instead do some kind of image recognition; then one can perhaps refine the video a lot before transmission to cut down on the number of bits. I think one can construct many examples where the compression can be fine-tuned for different use cases, and let machine learning help us with that. But whether this is just ML-improved joint source and channel coding (an autoencoder?) or something deeper isn't particularly clear yet. One has to figure out whether most gains come from improving what happens at the application layer or if the adaptation to the physical wireless channel is also important. It is plausible that the most important aspect is to determine how to divide the application between the transmitter and receiver.
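As a toy illustration of task-dependent compression (not anything from the episode), the sketch below keeps only the strongest 2-D FFT coefficients of a frame; a "recognition" task can tolerate keeping far fewer coefficients than a "viewing" task. The transform, the kept fractions, and the error metric are all assumptions made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.standard_normal((64, 64))   # stand-in for a video frame

def compress(img, keep_fraction):
    """Transform coding: keep only the strongest frequency coefficients."""
    F = np.fft.fft2(img)
    k = int(keep_fraction * F.size)
    threshold = np.sort(np.abs(F).ravel())[-k]
    return np.where(np.abs(F) >= threshold, F, 0)  # nonzeros would be sent

# "Watch the movie": high fidelity, many coefficients.
# "Run image recognition": coarse content may suffice, far fewer coefficients.
for task, frac in [("viewing", 0.30), ("recognition", 0.02)]:
    F_sparse = compress(image, frac)
    recon = np.real(np.fft.ifft2(F_sparse))
    err = np.linalg.norm(image - recon) / np.linalg.norm(image)
    kept = np.count_nonzero(F_sparse)
    print(f"{task}: {kept} coefficients kept, relative error {err:.2f}")
```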
@@WirelessFuture Yes, that looks like a likely solution. Hence, current studies on this topic refer to the Semantic Encoder/Decoder and the Semantic Layer (level 2), which lies between the Physical layer (level 1) and the application (the effectiveness layer, level 3, from Shannon's work).
Then I wonder whether existing coding techniques would be of use, or whether absolutely new models would be required.
I would guess that current channel codes will continue to work well, and that the benefits lie in refining the source compression. But I might be wrong.
@@WirelessFuture Thanks for your responses. Looking forward to the next episode.
Hello professor, I have one question: what do you think about satellite communication as an NGN? I am planning to do my PhD in this field.
Hi, this is an exciting area with a lot of practical developments related to low-orbit satellites (Starlink, OneWeb, etc.). It is a good topic for a PhD. We might talk about this in a future episode.
@@WirelessFuture
That is amazing professor, can't wait for that episode.
Thank you very much.
Thank you for the very informative discussion. As another possible topic, I'd like to hear your thoughts on OTFS.
Thank you! Do you always use single-antenna access points with cell-free mMIMO?
No, the textbook arxiv.org/pdf/2108.02541.pdf provides a theory for access points with any number of antennas.
When talking about sensing, isn't this kind of similar to how code division can also be used to measure distances? You can send the coded information with a pseudorandom sequence and also measure the distance by seeing how out of sync the sequence is.
Yes, any known signal can be used to sense/measure time delays and thereby estimate the propagation distance, but some signals give better accuracy than others for a given signal power. Pseudorandom sequences are good since they have good time correlation properties; a small time shift leads to a large variation in the received signal. The good thing with code division is that it might be enough to know the spreading codes to do the estimation since the encoded information is just a scaling of the spreading code.
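Here is a small numpy sketch of exactly this: correlating a known pseudorandom sequence against a delayed, noisy copy of itself and reading off the delay from the correlation peak. The sequence length, delay, and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pseudorandom BPSK sequence with a sharp autocorrelation peak.
L = 1023
code = rng.choice([-1.0, 1.0], size=L)

# Channel: the sequence arrives with an unknown delay plus noise.
true_delay = 137
rx = np.concatenate([np.zeros(true_delay), code, np.zeros(200)])
rx += 0.5 * rng.standard_normal(rx.size)

# Correlate the known sequence against the received signal; the peak
# position estimates the propagation delay (and thereby the distance).
corr = np.correlate(rx, code, mode="valid")
est_delay = int(np.argmax(np.abs(corr)))
print(f"true delay: {true_delay} samples, estimated: {est_delay} samples")
# distance = est_delay * (c / sampling_rate) in an actual system
```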
The array gain is the same as the beamforming gain, is that right?
Thank you.
Yes. The term “array gain” is sometimes used in situations where one uses an array of antennas or other types of sensors to process signals, but without necessarily transmitting a physical signal.
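A minimal sketch of the array gain for receive processing, assuming unit-gain per-antenna channels and unit-power noise (both assumptions for illustration): coherently combining the M antenna outputs multiplies the SNR by M.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 16                      # number of antennas / sensors
s = 1.0                     # known transmitted symbol (unit power)
h = np.exp(1j * rng.uniform(0, 2 * np.pi, M))  # unit-gain channel per antenna
noise = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
y = h * s + noise

# Coherent combining (matched filter): align the phases before summing.
# The unit-norm combiner keeps the noise power at 1 while the signal
# amplitude grows to ||h|| = sqrt(M), so the SNR grows by a factor M.
z = np.vdot(h, y) / np.linalg.norm(h)

snr_single = np.abs(h[0]) ** 2          # = 1 per antenna (unit noise power)
snr_combined = np.linalg.norm(h) ** 2   # = M: the array gain
print(f"per-antenna SNR: {snr_single:.0f}, after combining: {snr_combined:.0f}")
print(f"combined output magnitude: {np.abs(z):.2f} (≈ sqrt(M) = {np.sqrt(M):.2f})")
```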
Hello professor,
Can you explain the results in the figures of the paper on the facts and myths about reconfigurable intelligent surfaces?
Hi! Our best explanations of these figures can already be found in the text of the paper. But please feel free to ask follow-up questions based on what is written there.
@@WirelessFuture
I'm a bachelor's student and my graduation project is about this technique.
Can I have your email address to contact you?
Thank you for this insightful episode. How about using OAM in the backhaul of 6G? It seems to be a good choice for backhaul communication due to the mode orthogonality and the lower complexity at the receiver side.
High-capacity wireless backhauls can certainly make use of multiple spatial modes. When one operates in the radiative near-field, it is possible to send multiple individually resolvable beams over point-to-point line-of-sight channels. Our reservation is that OAM is sometimes described as a new untapped dimension, although it is just a special case of MIMO. The important thing is that one operates in the radiative near field so that one can get multiple spatial modes, not whether those modes happen to have helical wavefronts or some other physical shape. At the end of the day, it is the singular values of the channel matrix that determine capacity. I recommend: en.m.wikipedia.org/wiki/Orbital_angular_momentum_multiplexing
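To illustrate that last point, here is a short numpy sketch: for any point-to-point MIMO channel, the achievable rate follows from the singular values of the channel matrix, regardless of what physical shape the spatial modes take. The i.i.d. channel, array sizes, and equal power allocation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Any point-to-point MIMO channel H: its singular values, not the physical
# shape of the modes (helical OAM beams or otherwise), determine capacity.
Nt, Nr, snr = 4, 4, 100.0   # illustrative array sizes and total SNR
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

sigma = np.linalg.svd(H, compute_uv=False)

# Equal power over the spatial modes (water-filling would do slightly better).
capacity = np.sum(np.log2(1 + (snr / Nt) * sigma**2))
print(f"singular values: {np.round(sigma, 2)}")
print(f"achievable rate: {capacity:.1f} bit/s/Hz")
```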
Great discussion 😌 Can you say something about active vs. passive IRS?
I think that passive IRS is the practically interesting case since it has distinctly new characteristics. If one makes the IRS elements active, it will be too similar to a MIMO array: worse communication performance but rather similar hardware complexity.
Within the whole field, you often find a conversion from the squared magnitude to a multiplication with the complex conjugate, e.g., P = E{|z(t)|^2} = E{z(t) z*(t)}.
Why do we see that so often? Is it only for simplification and further manipulation?
Thanks.
Hi! This is the standard identity for the squared absolute value of a complex number. You can find more information here: en.wikipedia.org/wiki/Absolute_value#Complex_numbers
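Written out for z = a + jb, the identity is just the definition of the squared absolute value:

```latex
\[
  z z^* = (a + jb)(a - jb) = a^2 - (jb)^2 = a^2 + b^2 = |z|^2,
  \qquad\text{so}\qquad
  P = \mathbb{E}\{|z(t)|^2\} = \mathbb{E}\{z(t)\, z^*(t)\}.
\]
```

The product form is often easier to manipulate, for instance when expanding the expectation of a sum of signals into cross-terms.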
Hi, I'm a master's student in wireless. If possible, could you do an analysis of pCell from Artemis? Keen to hear your thoughts, thanks!
Hi! pCell from Artemis is a commercial name for a cell-free massive MIMO implementation. The company says so themselves on their website: www.artemis.com/papers
We talked about that topic in Episode 13.
Hi Erik, Emil, with respect to semantic communications, I think that the idea of animating avatars for reduced bandwidth or improved quality is already a reality, e.g., th-cam.com/video/xzLHZbBvKNQ/w-d-xo.html. However, this is done on higher layers. I am therefore wondering if semantic communications is actually a physical layer problem or not.
Thanks for the input! I think that semantic communication can be interpreted as a new type of joint source and channel coding that is tightly connected to the application layer, to identify what to encode and communicate. As you point out, there are already solutions of this kind for particular problems.