This is the most underrated channel on YouTube.
Why does this guy not have a million subscribers
I ask myself the same question every day
@@CodeEmporium you don't need them, though
@@CodeEmporium there are not too many ML engineers :)
Quality of the followers over quantity ;)
Man your voice is so clean and pleasant
Thanks for loving my voice
6:42 Why did you say we are considering a sigmoid activation? What would be different if I used another activation function such as ReLU? Would the KL term change? Do we apply this to all layers or only the last layer of the encoder?
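In case it helps (this is my reading of the standard sparse-autoencoder formulation, not necessarily exactly what the video uses): the KL term compares a target sparsity rho against each hidden unit's average activation rho_hat, treating both as Bernoulli parameters, so it only makes sense when activations lie in (0, 1), which sigmoid guarantees. With ReLU the average activation is not bounded in (0, 1), so that particular KL form doesn't directly apply, and an L1 penalty on the activations is a common substitute. The penalty is usually applied to the hidden (bottleneck) activations rather than to every layer. A minimal PyTorch sketch (function and variable names are my own):

```python
import torch

def kl_sparsity_penalty(hidden_activations, rho=0.05, eps=1e-8):
    # hidden_activations: (batch, n_hidden) sigmoid outputs in (0, 1).
    # rho_hat is each unit's mean activation over the batch.
    rho_hat = hidden_activations.mean(dim=0).clamp(eps, 1 - eps)
    # Bernoulli KL(rho || rho_hat), summed over hidden units.
    kl = rho * torch.log(rho / rho_hat) + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))
    return kl.sum()

# Typical use: total_loss = reconstruction_loss + beta * kl_sparsity_penalty(h)
```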
You have a great understanding of this particular domain, but the way you get a million subscribers is to explain in depth why there is only 1 bottleneck and not 2, how the encoder compresses, what the working behind it is, each and everything, and explain it at a student level so it can be understood very easily. But I appreciate your stuff. Keep it up, man!
I have a question: so is the vanilla autoencoder completely useless? Should we stay away from it and instead use a regularized autoencoder as the default autoencoder?
Is n_h at 6:43 the number of hidden layers or the number of neurons in the hidden layer? Also, are we considering only 1 hidden layer?
A very nice explanation though! I am preparing for an interview and this is like a gold treasure for me!
This is an awesome explanation 🙌. One thing to consider: please don't show the subscribe button every 2 minutes, it is very confusing. Everything except this looks good 👍. Thanks.
9:37 Can't a CNN do the same job? What's the difference between using a CNN and an autoencoder with convolutional layers? Or are they actually the same thing? I'm new to both types of networks, so I would appreciate any elaboration. Thanks.
With a CNN you need labelled data
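To expand on that a bit (my own sketch, not taken from the video): a CNN classifier maps an image to a label, so it needs labelled data, while a convolutional autoencoder uses the same kind of conv layers in its encoder but is trained to reconstruct its own input, so the image is its own target and no labels are required. Roughly, in PyTorch (the layer sizes are arbitrary and just for illustration):

```python
import torch
import torch.nn as nn

# Convolutional autoencoder: same conv building blocks as a CNN,
# but trained to reproduce the input image (no labels needed).
class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.rand(8, 1, 28, 28)        # a batch of unlabelled images
loss = nn.MSELoss()(model(x), x)    # the target is the input itself
```

The encoder alone looks just like the first half of a CNN; the difference is entirely in what the network is trained to predict.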
What great work! Thanks for the videos. BTW, can you make a video about conv-deconv networks? How are they different from autoencoders?
Great explanation!! Thank you!
Are autoencoders and encoder-decoders the same thing?
Why do we need the encoder and decoder to be a shallow network?
We would still want the encoder and decoder to learn as much from the input as possible. A shallow network allows many different features to be learned, thus increasing the odds that the network has learned a good amount from the input image.
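For a concrete picture (a minimal sketch of my own, with arbitrary sizes): the shallowest version has a single hidden bottleneck layer, so what the network can learn from the input is governed mostly by the width of that layer rather than by depth.

```python
import torch.nn as nn

# Shallow autoencoder: one linear layer in, one linear layer out.
# The bottleneck width (64 here) controls how many features can be encoded.
shallow_autoencoder = nn.Sequential(
    nn.Linear(784, 64),   # encoder: 28*28 pixels -> 64-dim code
    nn.ReLU(),
    nn.Linear(64, 784),   # decoder: 64-dim code -> reconstructed pixels
    nn.Sigmoid(),
)
```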
would love videos on NLP! :)
Coming soon! ;)
@@CodeEmporium still waiting for NLP videos
Can you make a video for beginners? This stuff with formulas and equations gets too complicated.
Great video...
Please change the Comic Sans font.
Very cool
01:22 .. data around us like images and donkeynets
That intro though!!!