Thanks for explanation
love what you are doing, your recent videos were really helpful to me, keep up the good work, keep exploring and uploading videos 👍
Thanks! Glad you like the videos!
OK genius, I am also approaching the problem the same way as you; I don't use the mathematical way. My question is simple, because LTI depends on convolution. Here it is:
Convolution just stacks and scales the input. That's why the input to an amplifier is stacked and scaled, i.e. amplified. But a filter design attenuates frequencies, and I don't see how it can reject certain frequencies just by stacking and scaling the input. If possible, could someone explain this to me?
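One way to answer the question above: the filter output is a sum of shifted, scaled copies of the input, and at certain frequencies those shifted copies cancel each other out. That cancellation is exactly how stacking and scaling can reject a frequency. A minimal sketch in Python with numpy (the 5-tap moving average, sample rate, and test frequencies are illustrative choices, not from the video):

```python
import numpy as np

fs = 1000                           # sample rate in Hz
t = np.arange(0, 1, 1 / fs)
low = np.sin(2 * np.pi * 5 * t)     # 5 Hz tone
high = np.sin(2 * np.pi * 200 * t)  # 200 Hz tone

# 5-tap moving average: the output stacks 5 shifted, scaled copies of the input
h = np.ones(5) / 5

low_out = np.convolve(low, h, mode='same')
high_out = np.convolve(high, h, mode='same')

def rms(x):
    return np.sqrt(np.mean(x ** 2))

print(rms(low_out) / rms(low))      # close to 1.0: the 5 Hz tone passes through
print(rms(high_out) / rms(high))    # close to 0.0: the 200 Hz tone is cancelled
```

At a 1 kHz sample rate, a 200 Hz sine repeats every 5 samples, so the 5 shifted copies that the moving average stacks are equally spaced over one full period and sum to zero. Nothing beyond stacking and scaling is needed to null that frequency.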
I love the videos you make.
The good thing is that you explain the concept right to the point and don't waste time, which shows you have real command of the subject.
I really hope you don't lose the motivation to make such tutorials, because there are enthusiasts like me and my colleagues who are literally waiting for your future videos.
So please keep making videos.
Well explained, beautifully demonstrated. Thanks!
Are the standard convolution shown here and depthwise separable convolution functionally equivalent? That is, will they both give the same outputs for a given input? Is it just that depthwise separable convolution saves on computation but is otherwise functionally the same?
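For what it's worth, the two are not functionally equivalent in general: depthwise separable convolution is a constrained factorization of standard convolution and cannot represent every standard kernel, so the outputs differ even though the input/output shapes match. What it buys is a much smaller multiplication count. A quick sketch of the cost comparison, assuming the notation used in the video (Dk kernel size, M input channels, N output channels, Dp output spatial size); the concrete sizes are illustrative, not from the video:

```python
# Cost of producing a Dp x Dp output from M channels with N filters of size Dk x Dk
def standard_conv_mults(Dk, M, N, Dp):
    # each of the N filters is Dk x Dk x M and slides over Dp x Dp positions
    return Dk * Dk * M * N * Dp * Dp

def depthwise_separable_mults(Dk, M, N, Dp):
    depthwise = Dk * Dk * M * Dp * Dp  # one Dk x Dk filter per input channel
    pointwise = M * N * Dp * Dp        # N filters of size 1 x 1 x M mix the channels
    return depthwise + pointwise

Dk, M, N, Dp = 3, 64, 128, 32          # illustrative sizes
std = standard_conv_mults(Dk, M, N, Dp)
sep = depthwise_separable_mults(Dk, M, N, Dp)
print(sep / std)                       # equals 1/N + 1/Dk**2, about 0.12 here
```

The ratio simplifies to 1/N + 1/Dk², so with 3x3 kernels the separable version needs roughly one-ninth of the multiplications once N is large.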
Thank you for explanation, but please, use more intuitive designations (like H for height and W for width)
What would pointwise convolution look like in a 1D separable convolution?
Well, I can't understand why the input size of the second phase is still M. Is that a typo?
Do you have Python code for 3D depthwise separable convolution?
Can you make a video on Resnet Architecture for beginners?
well explained , you made it look really easy !
very helpful video, thanks
Perfect explanation. I appreciate it. Thank you!
Okay, now I get it.... Thanks!
That was put so simply. Just loved it :) Thanks a lot
Abinash Ankit Raut glad you liked it! Thanks!
Hey really helpful Thank You. Can you also make a video on Winograd Convolution?
absolute banger! well done
Brilliant explanation, described in a very understandable way.
Shahriar Mohammad Shoyeb thanks! Glad you liked it !
Is it correct that an arbitrary standard convolution cannot be expressed as a depthwise convolution (except in some special cases)? Depthwise convolution is just another type of convolution, right?
This is a wonderful tutorial which deserves (and in the future will get) way more views
Rahul Gore Hoping the same. Thanks!
2:00 Shouldn't it be (Dk^3) * M? For a matrix multiplication of size (n x m) . (m x p), the number of multiplications is n x m x p.
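The n x m x p count the comment relies on is easy to verify directly: a naive triple-loop multiply of an (n x m) matrix by an (m x p) matrix performs exactly one scalar multiplication per (i, j, k) triple. A minimal check in Python (numpy is only used to confirm the result; the sizes are arbitrary):

```python
import numpy as np

n, m, p = 4, 5, 6
rng = np.random.default_rng(0)
A = rng.random((n, m))
B = rng.random((m, p))

# Naive triple-loop multiply: one scalar multiplication per (i, j, k) triple
C = np.zeros((n, p))
mults = 0
for i in range(n):
    for j in range(p):
        for k in range(m):
            C[i, j] += A[i, k] * B[k, j]
            mults += 1

print(mults)                 # n * m * p = 120
assert np.allclose(C, A @ B) # matches numpy's matrix multiply
```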
That was a very lucid explanation, thanks.
Glad you found it useful, Sangeet
Why are the output number of features always an integral multiple of the number of input channels?
It is so useful and clear
Simply Brilliant thank you for much for a detailed information about Xception
thank you, understood
Super explanation
thanks for your high-quality videos which really help me a lot
Finally understood MobileNets and DSCs. Thanks for the clear video!
Great help for understanding depthwise separable convolution!!!
Thanks!
Thanks.
really helpful for me to understand the depthwise separable convolution! Thank you!
It provided a clear understanding for me. So glad, thank you!
Excellent video, an easy and clear explanation of depthwise separable convolution. Really grateful to you.
omg this came out 4 years ago? I am living under a rock
Many many thanks.
This video is really helpful. Thank you!
Thanks a lot for this! Very helpful.
Easy to understand. I suggest adding animations for better understanding, if possible. Thanks!
Very crisp explanation loved it.
Thanks
good
Impeccable explanations as always!
Thank you so much for this amazing explanation!
You explained it in the best way.
Thank you for this. What are you using for animations?
This is a fantastic explanation.
Thank you so much!!
amazing content. thanks alot :)
Great explanation! Thank you very much!
Awesome video, subscribed! Had a really good understanding of what depth wise separable convolution is at the end of the video.
Your channel is underrated and is pure gold
Nice video. Thanks.
Great video, reading the reference paper is going to be much easier now
Amazing video sir.
Amazing Explanation!
excellent!very nice video
thank you, it was of great help !!
Great! Neatly put. Thanks for the video. One thought -- we could add a parameter lambda as a multiplication factor in the combination step and treat it as a trainable parameter. That increases the total trainable parameters by 1 but may help the solution converge faster, I guess. Depthwise sep conv = Depthwise conv + lambda * Pointwise conv.
Where to use depthwise separable convolution?
How do we come to know to where to use it? 🤔
@@strongsyedaa7378 Wherever you want to reduce the number of trainable parameters. Most mobile-oriented networks are defined with this depthwise convolution.
Nice video! I look forward to future videos on object detection and semantic segmentation.
Great video. Helped a lot!
Good Explanation! Thanks
Awesome explanation
This was great.
great video,looking forward more
This is like MapReduce.
Thank you so much for making such a nice video that is so easy to understand.
GUO GUANHUA For Sure! I'm glad you understood it :)
This is excellent
Hey, I am making a video using some of your animations. Hope that's cool? It's on MobileNets.
bluesky314 Absolutely. Just list this video in your references. Send a link to your video here when you're done. I'd like to see it :)
Thanks! Here it is: th-cam.com/video/HD9FnjVwU8g/w-d-xo.html Would love your feedback
hey
worth the time!!
Very clear, makes it easy to understand! Thanks!
Zhuotun Zhu anytime! Thanks for watching
Awesome video dude
Very clear explanation. Thanks a lot.
Welcome! Glad you got some use out of it
CodeEmporium Yeah.. I was reading W-Net where they have used it..
very nice!
very clear!
excellent
Thanks!
Amazing .. explained so clearly !! Thank you
Harsha Vardhana anytime! Glad you liked it!
How does this work with ResNets and DenseNets?
In the Xception research paper they actually used skip connections and dense layers; skip connections were reported to give a major boost to the final accuracy.
Loved it!
Thank you. You saved me a lot of time.
It's what it do. Thanks for watching :)
Explained it so simply. Thanks!
No worries. Glad it helps!
This video was very helpful, thank you :)
Welcome. Glad it was useful!
Best explanation I've found on this, thanks
As long as it helps :)
"immediately" hahaha. Thanks bro. Subscribed
Awesome explanation . Loved it.
Mayank Chaurasia So glad you loved it :)
Good one!
Very helpful Thank you
Thanks! Glad it was of use.
omg. you just saved the day!
You can always count on your friendly neighborhood data scientist..
can you do a video on Binarized Neural Networks?
very helpful, thanks
Glad it was helpful. Thanks for watching!
First, thank you for making this helpful video.
Second, why can't computer science people agree on one notation for anything at all?! It's like for every video I watch I've got to learn a new set of notations... BOY. And why is F the input and not the filters? That's just straight-up confusing, man.
humans really can't agree on anything.
Awesome video! ("Sakkath video!", Kannada)