Have literally spent the last couple of days trying to understand few shot learning for a university project and haven't been understanding it at all until this video. Great explanation, thank you so much!
Wonderful. The best explanation on Few-shot I've seen so far. Thank you!
Thanks. One of the few videos that explained this concept with near zero jargon.
Even a toddler can understand this. Thank you.
Best lecture about Few-shot learning! Thank you
Man, it's difficult to tell a beaver from an otter
Thank you! The presentation helped me understand few-shot learning.
You saved me time learning this concept. Thank you!
Best explanation of Meta Learning
Best video on this concept! Please keep up the great work! Thank you!
Thanks for making it look like a piece of cake. I look forward to many more lectures from you.
Brilliant, intuitive explanation of few-shot learning! Thank you for uploading.
Extremely clear explanation. Thank you so much.
12:51 Just had to say that the two "hamsters" in your support set image aren't hamsters. Those are guinea pigs.
Wonderful explanation
Thank you so much! Great explanation
Thanks for the best explanation ever. I really appreciate your effort.
Thank you for this video. It's awesome
Thanks for such a detailed explanation!
The best explanation ever
You explained it very well. Thank you very much.
Thank you so much for this amazing explanation
Professor Wang, I have learned a lot from you. Thank you very much, and I hope you will publish more learning videos in the future.
Awesome!!! A Great presentation, Thank you!
Hello, I found this video helpful. Could you maybe also upload the other parts? Thank you.
Thanks. Just uploaded the 2nd part. Will upload the 3rd in a day.
your lectures are very easy to understand. Keep it up👍
Thank you so much. That was an extremely good explanation. Please carry on.
Thanks a million. Extremely good explanation and brilliant slides.
Thank you.
I like the explanation
Thanks for the clear explanation
such a good explanation, Thanks!
Wonderful explanation. Thank you, sir, for this amazing content.
Thank you for your explanation!
It's easy to understand. Thank you.
Really good explanation, thank you!
Many thanks
Thank you so much.
Does this mean that if you train the model using a particular support set, then at test time you can add a new class to the support set (one never seen before), and the model will still be able to tell that a query belongs to that new class, even though it was never trained with that class in the support set?
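(An illustrative sketch, not from the video: this is roughly what the prediction step looks like. The embed function, the file names, and the "pangolin" class are all made up; embed just stands in for the trained Siamese/embedding network.)

import numpy as np

def embed(image_path):
    # Stand-in for the trained embedding network: we fake it so that images
    # of the same animal get nearby unit vectors, which is exactly the
    # property the real Siamese network is trained to have.
    animal = image_path.split("_")[0]
    base = np.random.default_rng(sum(map(ord, animal))).normal(size=16)
    noise = np.random.default_rng(sum(map(ord, image_path))).normal(scale=0.1, size=16)
    v = base + noise
    return v / np.linalg.norm(v)

# 3-way 2-shot support set; "pangolin" was never seen during training.
support_set = {
    "otter":    ["otter_1.jpg", "otter_2.jpg"],
    "beaver":   ["beaver_1.jpg", "beaver_2.jpg"],
    "pangolin": ["pangolin_1.jpg", "pangolin_2.jpg"],
}

query = "pangolin_query.jpg"
q = embed(query)

# Compare the query embedding to each class's support images and predict
# the most similar class -- no retraining is needed for the new class.
scores = {cls: float(np.mean([q @ embed(img) for img in imgs]))
          for cls, imgs in support_set.items()}
print("prediction:", max(scores, key=scores.get))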
Thank you very much
Pretty good way of teaching
Great presentation. Thank you professor Wang
Thanks a lot!
Why don't the similarities add up to 1? Aren't the classes mutually exclusive? Or is it not about the classes being mutually exclusive, but about the fact that the sample and the input overlap, like a drawing and its reference, so an input image can be similar to other images even if they are not the correct class?
It depends on how they are computed. If they are the outputs of Softmax, then they add up to 1. If they are computed by the Siamese network, then they don't.
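(Just to make the reply above concrete, with made-up numbers: raw pairwise similarity scores from a Siamese-style comparison are computed independently for each support-set class, so nothing forces them to sum to 1; pushing the same scores through Softmax normalizes them into a distribution that does.)

import numpy as np

# Made-up pairwise similarity scores (e.g., sigmoid outputs of a Siamese
# network, one per support-set class); each is computed independently,
# so they need not sum to 1.
pairwise_sim = np.array([0.91, 0.40, 0.15])
print(pairwise_sim.sum())        # 1.46 -- not a probability distribution

def softmax(z):
    # Standard softmax: exponentiate and normalize so the outputs sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

probs = softmax(pairwise_sim)
print(probs, probs.sum())        # a proper distribution that sums to 1.0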
I think preparing a good support set is quite challenging. Is that true?
What was the criterion for selecting these images (support set)?
Really awesome, sir.
Nice Work. Thanks
The animal in the water is an otter
Nice rhyme
Is it necessary that the classes in the support set appear in a large dataset such as ImageNet?
No, the classes of the support set do not appear in the training set (e.g., ImageNet).
What about zero-shot learning?
Thank you!
Amazing
Thank you!
nice
It's 2024, please stop using a potato as a microphone
Thank you!