Thank you, dear Prof. Emil and Prof. Erik, for this meaningful and enlightening discussion about machine learning. From this conversation, I can deduce that the significance of communication domain knowledge is not going to decrease and that communication theory is not going to be dumped (at least for the physical layer).
Thanks Emil and Erik for a wonderful and enlightening talk. I am currently writing a paper on the interplay between optimization and machine learning in signal processing, and this really clears up a lot of questions.
As for the question you raised about what people should learn in school to become good communication engineers: as a graduate student currently studying communication and electrical engineering, I feel that my life would have been much easier if I had taken an introduction to information theory, an intuitive course on linear algebra, and of course some machine learning in school (college/high school). Not to mention optimization: I think any engineer must have a thorough understanding of how to formulate and tackle mathematical optimization problems.
An excellent conversation on the basic concepts of machine learning and its scope in wireless communications. Thanks a lot to both of you for your sincere efforts.
Please add chapters to your video. It really helps to understand the content and navigate it.
Thank you guys for spreading the knowledge. Much appreciated.
Good job Prof Emil Björnson. One of the downsides of using deep learning in wireless communication is the limited availability of datasets that can be used in a supervised sense. Deep unfolding, in my opinion, should go beyond iterative function approximation in a supervised sense. I think we should start exploring how to reformulate the deep unfolding approach in an unsupervised manner, such that the parameters are learned to optimize the original objective function without target labels obtained from a conventional optimization solution. I'm happy to partner with Prof Emil Björnson in this area if granted the opportunity.
Generating large data sets is indeed one of the limiting factors, which is why researchers are trying various approaches to take a small set of labeled training data and synthetically generate a larger set based on it.
Deep unfolding is an unsupervised method, at least when it is applied to an algorithm that carries out function minimization. The cost function is then the average function value that you want to minimize with the original algorithm.
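To make this concrete, here is a minimal sketch (a toy problem with assumed parameters, not from any specific paper) of unsupervised deep unfolding: K gradient-descent iterations on f(x) = ||Ax - y||^2 are unrolled into layers with one learnable step size each, and the training loss is the average objective value at the output, so no target labels are needed.

```python
# Minimal sketch of unsupervised deep unfolding (PyTorch), under assumed toy
# settings: K unrolled gradient steps on f(x) = ||Ax - y||^2 with learnable
# per-layer step sizes. The training loss is the average objective value at
# the network output, so no labeled solutions from a conventional solver
# are needed.
import torch

torch.manual_seed(0)
m, n, K = 30, 20, 10              # measurements, unknowns, unfolded layers
A = torch.randn(m, n) / m**0.5    # fixed system matrix (assumed known)

step = torch.nn.Parameter(0.1 * torch.ones(K))  # learnable step sizes
opt = torch.optim.Adam([step], lr=1e-2)

def unfolded_net(y):
    """K unfolded gradient steps: x <- x - step_k * A^T (A x - y)."""
    x = torch.zeros(y.shape[0], n)
    for k in range(K):
        x = x - step[k] * (x @ A.T - y) @ A
    return x

for it in range(500):
    y = torch.randn(64, m)                 # fresh batch of observations
    x_hat = unfolded_net(y)
    loss = ((x_hat @ A.T - y) ** 2).sum(dim=1).mean()  # average objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```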
@@WirelessFuture Yes, exactly. Thank you Prof.
Most of the papers I have seen so far use deep unfolding in a sort of supervised sense, in which the loss function is either the mean squared error or the mean absolute error between the predicted output and the true target labels. It would be an excellent research direction if the method could be explored via a functional optimization approach in an unsupervised manner.
Great episode. My question is: deep learning needs a lot of time to be trained (it depends on the scenario, of course, but it generally seems to take more time than classical approaches), so does the improvement we get from deep learning methods outweigh their cost in terms of larger delays? I also want to mention that, since the wireless environment is dynamic and changing, we cannot train our model once and use it forever.
Thanks guys, this podcast is awesome!
Great podcast. Thanks a lot to both of you.
Isn't it reinforcement learning (RL) that you discussed at 23:15? I believe RL has a future in autonomous decision-making in wireless communications. What are your views on this?
Yes, I suppose that learning how to bike through trial and error resembles reinforcement learning. I agree with your statement about autonomous decision-making. There are many resource allocation problems in wireless networks, and these become increasingly complex as we add new constraints (e.g., due to network slicing), new variables (e.g., due to network function virtualization), and random traffic dynamics. If we cannot afford to solve these problems to optimality but are forced to use suboptimal low-complexity algorithms, then it is likely that RL can identify ways to outperform the low-complexity algorithms previously designed by engineers. One just has to be careful when it comes to robustness, since machine learning is good at managing typical inputs but less good at handling unseen inputs.
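As a toy illustration of the learning loop (a minimal sketch with assumed channel qualities, not something from the episode), here is a stateless special case of Q-learning, essentially a multi-armed bandit, that learns which of a few channels to allocate:

```python
# Minimal sketch (assumed toy problem) of RL for a resource allocation task:
# an agent picks one of a few channels with unknown success probabilities
# and learns which to prefer via epsilon-greedy value updates. Real slicing/
# virtualization problems are far larger, but the loop has the same shape.
import random

random.seed(0)
success_prob = [0.2, 0.8, 0.5]   # hidden channel qualities (assumed)
Q = [0.0] * len(success_prob)    # value estimate per channel
alpha, eps = 0.1, 0.1            # learning rate, exploration rate

for t in range(5000):
    if random.random() < eps:                        # explore
        a = random.randrange(len(Q))
    else:                                            # exploit
        a = max(range(len(Q)), key=lambda i: Q[i])
    reward = 1.0 if random.random() < success_prob[a] else 0.0
    Q[a] += alpha * (reward - Q[a])                  # incremental update

print([round(q, 2) for q in Q])  # should roughly track success_prob
```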
Thanks Emil and Erik, very interesting!
Thank you for the informative talk.
Great podcast. My query is not related to deep learning; it is somewhat related to 5G and Massive MIMO. Why is the frame duration kept at 10 ms? Is it related to Massive MIMO or to the coherence block (coherence bandwidth and time)?
The frame duration is mainly determined by the user mobility (size of the coherence block), which isn't changing as we introduce Massive MIMO. However, if one wants to reduce latency, it is relevant to reduce the frame duration to enable decoding the data block more quickly. 5G supports things such as mini slots for that purpose.
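To make the mobility dependence concrete, a common rule of thumb is that the coherence time is roughly T_c = lambda/(2v). The carrier frequency and user speed below are illustrative assumptions, not numbers from the episode:

```python
# Back-of-the-envelope coherence time using the rule of thumb
# T_c ~ lambda / (2 v). The carrier frequency and user speed are
# illustrative assumptions, not numbers from the episode.
c = 3e8      # speed of light [m/s]
fc = 2e9     # carrier frequency [Hz] (assumed)
v = 30.0     # user speed [m/s], roughly highway driving (assumed)

wavelength = c / fc          # 0.15 m
Tc = wavelength / (2 * v)    # ~2.5 ms

print(f"Coherence time ~ {Tc*1e3:.1f} ms")  # shorter than a 10 ms frame
```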
My Heros....
Dear Professors! I am an engineer in the telecom industry (and simultaneously a PhD student in telecom engineering). I am doing research on machine learning in mobile communications. Can you give me any ideas related to channel estimation aided by machine learning? Thank you so much!
Hi! Channel estimation is a classical signal processing problem for which there exist many model-aided Bayesian solutions and model-free solutions (for deterministic variables). So machine learning is only useful when there is some channel structure that exists but that we cannot model in a way that keeps classical methods tractable. I'm sure one can find such scenarios, but I don't know how practically relevant they are. ITU held a channel estimation with machine learning competition last year. It was about learning spatial structures, such as what scattering objects exist in the propagation environment.
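For reference, here is what the classical baselines look like in a minimal sketch (scalar real-valued channel, assumed variances, not from the ITU competition): model-free least-squares versus a model-aided Bayesian LMMSE estimator that exploits the channel statistics.

```python
# Minimal sketch (assumed toy parameters) of the two classical baselines:
# model-free least-squares (LS) estimation versus a model-aided Bayesian
# LMMSE estimator that exploits the known channel and noise variances.
import numpy as np

rng = np.random.default_rng(0)
N = 1000       # number of pilot transmissions (assumed)
var_h = 1.0    # channel variance (assumed)
var_n = 0.5    # noise variance (assumed)

h = rng.normal(scale=np.sqrt(var_h), size=N)       # channel realizations
y = h + rng.normal(scale=np.sqrt(var_n), size=N)   # received pilot (pilot symbol = 1)

h_ls = y                               # LS: uses no channel statistics
h_lmmse = var_h / (var_h + var_n) * y  # LMMSE: shrinks toward the prior mean 0

print("LS MSE:   ", np.mean((h_ls - h) ** 2))     # ~ var_n
print("LMMSE MSE:", np.mean((h_lmmse - h) ** 2))  # ~ var_h*var_n/(var_h+var_n)
```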
@@WirelessFuture thank you so much for your nice answers
Hello sir. I am facing a problem in writing MATLAB code for MIMO ROC plots. Can you please help me with how to proceed, perhaps with a demo code?
You can find plenty of code here: github.com/emilbjornson
But unfortunately, I don't think any of it matches what you are looking for.
@@WirelessFuture Sir, can you please tell me how to generate the dataset for both hypotheses H0 and H1 in MATLAB code?
@@linuxpatel9358 I could try but you have to describe the problem in detail first.
@@WirelessFuture I want to write the code for Eq. (12), but I don't know how to generate the dataset for it: y, beta, the small-scale fading coefficients, the large-scale fading coefficients, etc. And how can I obtain data for theta here? Please tell me how I can try to code this. This is the paper I am trying to code: documentcloud.adobe.com/link/review?uri=urn:aaid:scds:US:581dc9cb-2ded-466f-adcb-4359daa5b8f
Since your question is connected to a particular paper, I recommend that you contact the authors of that paper if you have any questions. I guess that the curves in the paper are generated by using the formula in (12) for fixed values of (13)-(15) and varying gamma' to get a curve. What you need to do is select the parameters that appear in (14) and (15). I suppose that the values the authors used are written out in the paper. If not, I suggest that you contact the authors and ask what numbers they used.
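In the meantime, here is a generic illustration of how one usually generates detection data under two hypotheses. It is not the specific model of the linked paper (whose equations I cannot reproduce here), and every parameter value below is an assumption:

```python
# Generic illustration (not the specific model in the linked paper) of
# generating a detection dataset under two hypotheses: H0 is noise only,
# H1 is a faded signal plus noise. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 10000      # samples per hypothesis (assumed)
var_n = 1.0    # noise variance (assumed)

beta = 10 ** (rng.normal(0, 2, N) / 10)   # log-normal large-scale fading (assumed 2 dB std)
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # Rayleigh small-scale fading
s = 1.0        # known transmitted symbol (assumed)

noise0 = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(var_n / 2)
noise1 = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(var_n / 2)

y_H0 = noise0                          # H0: no signal present
y_H1 = np.sqrt(beta) * h * s + noise1  # H1: signal through the fading channel

# An energy detector with a threshold sweep over |y|^2 then gives one ROC
# point per threshold (detection probability vs. false-alarm probability).
```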
How? ...
Machine learning is not equal to deep neural networks. Learning the parameters of probabilistic models is machine learning too. Once people figure out how to train probabilistic models effectively, I think they will dominate ML again.
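For instance (a minimal sketch with made-up data), fitting a Gaussian mixture via expectation-maximization is exactly learning the parameters of a probabilistic model, with no neural network in sight:

```python
# Minimal sketch with made-up data: learning the parameters of a
# probabilistic model (a two-component Gaussian mixture fitted via EM).
# No neural network involved, yet this is machine learning too.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.concatenate([
    rng.normal(-2.0, 0.5, size=(500, 1)),   # cluster 1
    rng.normal(3.0, 1.0, size=(500, 1)),    # cluster 2
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print("means:  ", gmm.means_.ravel())   # should recover ~[-2, 3]
print("weights:", gmm.weights_)         # should be ~[0.5, 0.5]
```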