Fabulous episode. I loved this part "Karpathy: Deep RL doesn't work, yet [[for me]] / Zahavy: ...It works" :)
Yeah, I can't help but wonder how these voices would ever be heard without your show. This must be true for 99% of scientists doing great work but with no exposure.
Thanks Daven, always a pleasure to hear from you, my friend. We think that scientists and engineers are the heroes of our time.
I'm really liking the breadth of topics you guys are covering. Also, your video production and the quality of discussions never disappoint. Looking forward to the next hour and a half!
Cheers Peter! We really appreciate it!
Wow, really love this episode. I think the guest has great insight and answers the world needs to hear. And the hosts are great at asking the right questions! 👏
Thanks Letitia! It means a lot coming from you!
These types of channels are very powerful.
This is amazing work that I keep coming back to over and over again. It's inspired me to get out of boring accounting projects and get back into Kaggle projects so I can pretend to start understanding all these pioneering papers and concepts this channel consistently elevates and delivers on. Thank you Dr Scarfe!
Excited about this one!!! Here we go!!! ✌😜😜🙌🙌💥💥
Sweet! Dr Zahavy is a north star! Great episode!
Love the intro Dr Scarfe 😍
Awesome episode. I'm really enjoying the discussion format and appreciate all the work you guys are putting into this. It's perfect for getting a rough overview on some ML topic and knowing about open questions and where the research is headed. I hope you are also gaining a lot from doing this and continue with this podcast!
Your question about stationarity seems to have gone, but stationarity just means that the statistical properties of the data don't change over time. Most ML models assume a stationary distribution. On financial datasets, one of the preprocessing tricks is to remove non-stationarity, i.e. a trend or seasonality. As you can imagine, in the RL world we are talking about changing state-action spaces, which is a whole new world of pain 😃 Imagine a simple example where the dog is moving around the world over time.
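To make the financial-data bit concrete, here's a minimal sketch of what I mean by removing a trend (purely illustrative, with synthetic toy data, not anything from the episode):

```python
# Illustrative only: first-differencing a trending "price" series is the standard
# trick to remove one kind of non-stationarity (the trend).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic random walk with drift, standing in for a price series.
prices = pd.Series(100 + np.cumsum(rng.normal(0.1, 1.0, size=500)))

returns = prices.diff().dropna()              # differencing strips the trend
log_returns = np.log(prices).diff().dropna()  # log-differencing is the usual choice in finance

# The raw series drifts upward; the differenced one fluctuates around a roughly constant mean.
print(prices.iloc[:50].mean(), prices.iloc[-50:].mean())
print(returns.iloc[:50].mean(), returns.iloc[-50:].mean())
```

Seasonality can be handled in the same spirit with seasonal differencing, e.g. something like `prices.diff(252)` for daily data with a yearly cycle.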
@@machinelearningdojo Haha, thanks for the answer nonetheless. It became clearer to me throughout the video. Although I understood it more in the sense that applying the meta-model to a new RL task results in a distribution shift, rather than a distribution shift within one particular task. I guess I shouldn't have deleted my comment. It would be nice to clarify this.
Wonderful content this week. The modular vs. monolithic discussion was on point.
That is a damn good episode, thanks guys!
Love the visuals
Is there a recommended library to play with meta-learning outside of RL? Preferably something that works with PyTorch.
Where can I find the simplest code implementations/coding tutorials on this? Math symbols make zero sense to me, but I get a clear picture when I code. If anyone has any information on a very simple code implementation, or can help, please reply. My immense gratitude to thee for help in advance~ Thanks!!
Oh my god.. Yannic codes on 2 computers!
Dope
I love you Yannic
Quite distracting when the camera switches to random people's faces when someone is talking, imo.
Thanks for the feedback, we are noobs at video editing and learning as fast as we can
I disagree and really appreciate the production quality.
@@MachineLearningStreetTalk But still, I think the quality is great. You have a lower noob-score in editing than other creators out there. Right, Ms. Coffee Bean? 😅
This got me thinking, perhaps it is utterly ridiculous to use a single simple algorithm for all of training.
That puts the onus almost entirely on the model, which is likely one of the reasons why we create multi-billion-parameter neural networks that are outperformed by children.
Think about the diversity of things that motivate us, intimidate us, or otherwise influence our behavior. They're mostly emergent from our social structure, right? Or are we just trying to conserve entropy all the way through?
I'm not convinced either way, entropy might be the fundamental thing underlying intelligence for all we know.
I've seen something akin to this discussed in the context of AI alignment and mesa-optimizers, which may be an interesting follow-up topic.
I think "Tom" is not short for "Tomas" for Zahavy. you might want to change the title
Thank you so much for pointing this out. I feel sure that I must have seen it written down as "Tomas" somewhere, but I can't find it. I have changed it to "Tom".
Guys, I am stressed... Who is a Data Scientist? Please make an episode to clarify.
The one who..
1. Knows the Keras/PyTorch APIs and ensembles with LightGBM for convergence
2. Got a good Kaggle ranking and a YouTube channel with 1000+ followers
3. Aware of, but never really understood, the probabilistic nature of everything; also quite unsettled
3a. He knows the answers to questions like "What is a bijector?" and "When should a covariance matrix be positive semi-definite?"
4. Wrote SQL queries until 2020; it is 2021, so naturally he is a Data Scientist
5. No, there is no such person called Data Scientist in the known universe