I really want to appreciate and acknowledge the amount of effort you put into your videos, from the great introductions to the great discussions. Thanks for sharing this with the world.
y e s
After I saw my first vid here, I was talking about the quality of it ALL weekend.
What I absolutely love about Prof. Welling's appearance here is how he talks about projects he does with his students (like any Prof / supervisor), but also *names* the students, makes them *visible*. This is almost unique behaviour in a sea of supervisors who give keynotes and talks but hide the hard work of their students behind phrases like "we did this" or "one of my students tried that". But Max Welling sees the situation clearly: a) he has tenure, a stable income, influence in the field, and everything; b) it is his students who still have to make a name and a place for themselves in the field. So he helps them by highlighting who they are and how much it is *their* work too.
Btw 0:51 is my absolute favourite moment from any MLST episode! 😂
Fantastic episode. This channel has become my favourite ML channel online: the content is deep, the style is refreshing, and the mix of multiple minds debating back and forth in an open, respectful, yet bold way is simply brilliant (and a key positive differentiating factor in my view). I am passionate about generative AI and it was great to learn about Max's views on the topic (and on causality), and yes, a conversation between Max Welling and Karl Friston would be something. It was also super revealing how physicists are bringing their insightful perspectives to the ML field; I have experienced this personally when interacting with some of them. It's another reminder of how domain experts in other areas can help shake things up in the ML community (and physicists in particular, with their deep and vast body of knowledge, are in a great position to do this). The quantum stuff, with all the open questions attached, was intriguing, challenging, and provocative. Brilliant episode, I hope you keep doing this for a very long time! ;)
Great videos guys, very inspiring! I’m about to start a PhD in machine learning and it is very exciting to see Welling’s intuitions about the future of the field and the community. Cheers!
glad that I found this channel: what a gem.
Listening to these talks really aids motivation and intuition.
I remember attending a short symposium a couple of years ago where Professor Welling was a speaker. There were other eminent speakers there as well, like Geoff Hinton, Terry Sejnowski, Radford Neal, and Ilya Sutskever. But I distinctly remember Prof Welling's lecture because he took two fields I was largely unfamiliar with (theoretical physics and equivariant representations) and explained them in a manner I could largely grasp.
I like this! Especially the fact that you don't interrupt the person being interviewed in the middle of a thought! Keep going!
Wow! This is the first episode I have seen from Machine Learning Street Talk. It won't be the last!
I nearly fell off my chair on hearing this discussion. At age 70 I have forgotten much of the maths I studied earlier, but have tried to keep up using JupyterLab. This talk is directly in line with my views, particularly on Lie groups and manifolds. Looking forward to more of this. More from Prof Welling please.
Definitely becoming one of my favorite AI/ML channels. Glad to see some more exposure for Max and co's work!
How have I not come across this podcast/channel until now? This is incredible content and quality!
Obviously great convo and thanks again! In particular I am really happy this time to get a review of how physics “basics” are being applied to ML; this was an amazing high-level “report” in that regard.
The way I see it, the last 2-3 centuries of physics research forged really great raw mathematical alloy into sharp swords, but we're still training everyone how to swashbuckle. Meanwhile, alloy mining and sword skunkworks continue apace.
Wow, why did it take so long for YouTube to recommend this channel?
Haha, asked myself the same question.
Extraordinary couple of hours of listening. Amazingly well done, MLST team.
Another fantastic episode - jam packed with thought-provoking ideas, great questions, and a really interesting guest. Thanks so much for taking the time to put this episode together and to share it.
54:07 The interesting thing about our visual system is that it probably doesn't have this rotational equivariance/invariance explicitly built-in. Try rotating the book in front of you while reading and you'll have a hard time reading it, right?
So it's "obviously" not some perfect mathematical function that's built-in our brain architecture. That, however, is not an argument that we shouldn't do it like that. We can do better than evolution for some things that are of interest to us, and we demonstrated that with various tech advancements like the "airplanes don't fly like birds but are faster and that's what we care about" kind of argument.
Absolutely loved the show! You guys nailed it. Hey Tim, how long does it take for you to edit this thing? The intro is crazy; I can imagine the time that went into preparing this one.
Thanks for commenting Aleksa! "how long does it take for you to edit this thing" You don't want to know 😂
@MachineLearningStreetTalk hahah got it, I won't tell anybody. 😂 Anyways, keep it up, it's great!
First! 😎🙌👌 So excited about this one, is it our best yet or what? 😃🎄😜
Source of the Alphafold2 PDF that Tim shows @8:53 - www.predictioncenter.org/casp14/doc/presentations/2020_12_01_TS_predictor_AlphaFold2.pdf
Thank you for the great content. I was a physicist for a long time, mostly working on symmetries, and changed to deep learning, which to my surprise is actually physics again 😊
So many great ideas in this talk. I’m really glad I found this channel. Keep up the great work
That intro was the most exciting ML intro I've ever heard.
This guy makes me want to leave my FAANG company to work for his lab. In fact, I'm going to apply.
Definitely my favorite episode so far.
Who does these amazing visualizations? Congrats!
I feel like this direction in ML is the most productive currently.
I learned a lot from watching this interview. Thank you so much!
This conversation is awesome! Amazing questions and amazing answers. Thanks for creating this.
Love this, thanks for using the footage 🙌🏼
This channel is insane. I'm glad I just stumbled upon this.
So glad this was recommended. Cheers for this.
Your videos are so informative on topics that are very difficult to understand. Thanks
Great content!
My favorite episode so far - Invariance is all 🦾
Is there an uncut version of the interview with Max?
Yes, from 32:00
Amazing video 😍😍
I love physics x ML research. Proves we're edging closer to the simulation.
PS: great content guys. This channel keeps me motivated. Continue with your work!
Please get Ilya on the show!
Thanks for this amazing content !
Was non-parametric Bayes popular before deep learning? I didn't know Bayesian methods ever took off, given their computational complexity.
When we learn about mathematics and physics at university and beyond, symmetries are looked for everywhere; even conservation laws are symmetries (via Noether's theorem, every continuous symmetry yields a conserved quantity). Yet in ML people seem hyped to learn about this. I have the impression that people in ML research know precisely why these mathematical tools work, yet sell them with some veil of mysticism ("the mathematics of general relativity and quantum field theory", wow, really?) to younger people with software engineering or computer science backgrounds. Just my impression.
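(For anyone who wants the conservation-law remark spelled out, it is just the textbook instance of Noether's theorem; a standard classical-mechanics sketch, nothing specific to the episode:)

```latex
% Simplest Noether instance: if the Lagrangian L(q, \dot q, t) has no
% explicit time dependence, the energy E is conserved.
\[
  E = \dot q\,\frac{\partial L}{\partial \dot q} - L,
  \qquad
  \frac{dE}{dt}
    = \ddot q\,\frac{\partial L}{\partial \dot q}
    + \dot q\,\frac{d}{dt}\frac{\partial L}{\partial \dot q}
    - \frac{dL}{dt}
    = -\frac{\partial L}{\partial t} = 0,
\]
% where the middle step uses the Euler--Lagrange equation
% d/dt (\partial L / \partial \dot q) = \partial L / \partial q.
```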
I think quantum neural nets + room-temp SQUIDs + fusion could set off the singularity.
A priori input shapes will be the difference between good and evil, and we need to be careful.
Amazing one!!! When are you having Schmidhuber on?
"The reviewers are a bit too grumpy. If it's not a completely finished idea, they will find the hole and they start pushing on it."
- almost every Yannic paper overview 🤣
To be fair, the rants are on point and the great ideas are uplifted.
Is it me or are Yannic's intro clips generated with some sort of lip-sync GAN?
Yannic recorded that clip from his phone! We stabilised it and removed the background. It's defo in GAN territory now 😜
Absolute delight as usual! As a side note, for anyone interested in how Gauge CNNs came to be and their possible impact on the DL community as well as (even more so) on the physics community, here is a wonderful article explaining it in layman's terms: www.quantamagazine.org/an-idea-from-physics-helps-ai-see-in-higher-dimensions-20200109/
Can you pls organize your bookshelf it's driving me insane
Anyone care to comment on the reasons for the difference in invariance between identifiable world objects (or their representations) and the world of symbols, i.e. text or numerics?
Brilliant
Hahahah, thanks for the QR code 😂😂😂
Can anyone recommend a nice book/script on AI which also covers the new developments of the last few years? It seems to be quite an interesting topic... ;)
Interesting video. Max Welling will be a keynote speaker at the GSI'21 conference, co-organized with SCAI Sorbonne and the ELLIS Paris Unit. GSI'21 will be dedicated to "Learning Geometric Structures", with a session on "Geometric Deep Learning": www.gsi2021.org
never gonna give you up
Invite Stephen Wolfram on here. He has some amazing ideas about computational irreducibility.
WHY ARE ALL THESE SMART PEOPLE SO BUFF!? D:
You have to be at the point just before chaos to learn best.
As Jordan Peterson would say, you have to dip a toe into the unknown and then bring that knowledge back, just like going and slaying your dragon.
Quite interesting how this psychological idea is being borne out algorithmically.
What's up with the commentary in between? Just let me watch the talking part.