Got a question on the topic? Please share it in the comment section below and our experts will answer it for you.
For Tensorflow Training and Certification, Call us at US: +18336900808 (Toll Free) or India: +918861301699
Or, write back to us at sales@edureka.co
Awesome tutorial! I have watched many videos and read articles but didn't have a clear understanding; after watching this I definitely have a deep intuition for autoencoders. Keep making such good tutorials!
Very good explanation, but I wanted to know 1) how an autoencoder can be used for network anomaly detection, and 2) how to combine LSTM cells within autoencoders.
Thanks.
1. The autoencoder architecture essentially learns an “identity” function. It will take the input data, create a compressed representation of the core / primary driving features of that data and then learn to reconstruct it again.
2. You will need to unzip them and combine them into a single dataset.
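To make point 1 concrete, here is a minimal sketch of the "learn an identity function through a bottleneck" idea: a tiny linear autoencoder written from scratch in NumPy. The data, dimensions, and variable names are all illustrative assumptions, not the code from the video.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples in 8 dimensions that really live on a 3-D subspace.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 8))
X = latent @ mixing

# One-hidden-layer linear autoencoder: 8 -> 3 (bottleneck) -> 8.
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))

def loss(X, W_enc, W_dec):
    recon = X @ W_enc @ W_dec
    return np.mean((recon - X) ** 2)

initial = loss(X, W_enc, W_dec)
lr = 0.01
for _ in range(500):
    code = X @ W_enc          # compressed representation
    recon = code @ W_dec      # reconstruction from the code
    err = recon - X           # reconstruction error
    # Gradients of the mean-squared reconstruction loss.
    grad_dec = code.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
print(initial, final)  # reconstruction error drops as the identity map is learned
```

The network is never told what the 3-D structure is; minimizing reconstruction error forces the bottleneck to capture the primary driving features of the data.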
Good Explanation. Quick question on sparse autoencoders. Is the KL divergence penalization similar to an L2 regularization?
No, these two are different things. The KL divergence penalty pushes the average activation of each hidden unit toward a target sparsity level, while L2 regularization penalizes the magnitude of the weights themselves.
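A small sketch contrasting the two penalties (hypothetical helper names, not from the video's code): the KL term is a function of activations, the L2 term a function of weights.

```python
import numpy as np

def kl_sparsity(rho, rho_hat):
    """KL divergence between target sparsity rho and mean activation rho_hat."""
    return (rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def l2_penalty(weights):
    """L2 (weight-decay) penalty on a weight matrix."""
    return np.sum(weights ** 2)

# The KL penalty acts on activations: zero when units fire at the target rate.
print(kl_sparsity(0.05, 0.05))   # 0.0
print(kl_sparsity(0.05, 0.5))    # > 0: units are far more active than desired

# The L2 penalty acts on weights, regardless of how the activations behave.
W = np.ones((4, 4)) * 0.5
print(l2_penalty(W))             # 4.0
```

So a sparse autoencoder with a KL penalty can still have large weights, and an L2-regularized network can still have dense activations; the two constrain different quantities.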
Nice video for beginners. I am new to the topic and want to understand how can convolutional autoencoders generate high-resolution image because the outputs of autoencoders are usually lossy/compressed. Thanks.
Hey Sidharth, This doc might be of help to you. arxiv.org/pdf/1606.08921.pdf.
Well explained
Thank You 😊 Glad it was helpful!!
Really Loved the explanation provided ....!!!
Thank you Edureka
thank you for another beautiful session .
Great Explanation!
Hi! Great video! I had a doubt about the working of the regularizer term in the loss function of autoencoder neural networks. Regularization in general means penalising the loss function so that the network does not overfit the training set and generalises well to unseen data.
What does the term "Kullback–Leibler divergence" mean here? And please explain the meaning of the sentence: "If the regularizer term was not included, the encoder could learn to cheat and map each data point to a representation in a different region of Euclidean space"?
This callback will write logs to /tmp/autoencoder
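The regularizer term being asked about is the Kullback–Leibler (KL) divergence. In a variational autoencoder it is typically the closed-form KL between the encoder's Gaussian output and a standard normal prior; without it, the encoder could indeed "cheat" by placing each data point in its own far-away region of the latent space. A hedged NumPy sketch of that closed-form term (illustrative names, not the video's code):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dims."""
    return -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))

# An encoding already matching the prior costs nothing...
print(kl_to_standard_normal(np.zeros(2), np.zeros(2)))            # ~0

# ...while "cheating" by pushing points far from the origin is penalized.
print(kl_to_standard_normal(np.array([5.0, -5.0]), np.zeros(2)))  # 25.0
```

Adding this term to the reconstruction loss forces all encodings to stay in one shared, smooth region of latent space instead of isolated islands.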
Valuable
Great😍😍
Can we use autoencoders to reconstruct a 3D model from a single 2D image?
very nice!!
thanks for this nice video
I like all of your detailed explanations about all the topics.
Great explanation!
Hey Harold, we are glad you loved the video. Do subscribe and hit the bell icon to never miss an update from us in the future. Cheers!
Can you help me get a spectral dataset to use for autoencoders and other non-linear methods? Also, if possible, some papers on vanilla autoencoders. I also have a question: can the PCA method be used for spectral datasets as well?
PCA is used to remove redundant spectral information from multiband datasets. So, you should create a smaller dataset from multiple bands while retaining as much of the original spectral information as possible.
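A rough sketch of that PCA workflow on a multiband dataset, done via SVD in NumPy (the synthetic "spectral" data and all names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical multiband dataset: 100 pixels x 6 highly correlated bands
# generated from just 2 underlying signals plus a little sensor noise.
base = rng.normal(size=(100, 2))
bands = base @ rng.normal(size=(2, 6)) + 0.01 * rng.normal(size=(100, 6))

# PCA via SVD: center the data, decompose, keep the top-k components.
centered = bands - bands.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 2
reduced = centered @ Vt[:k].T          # 100 x 2 instead of 100 x 6

explained = np.sum(S[:k] ** 2) / np.sum(S ** 2)
print(reduced.shape, explained)        # nearly all variance kept in 2 components
```

Because the bands are redundant, two principal components retain almost all of the original spectral variance, which is exactly the "smaller dataset from multiple bands" idea above.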
Hi, well explained... Just have a query: can we use autoencoders for building a recommendation system? What will its output be? I have studied that we can use them for dimensionality reduction, but I am not able to understand the output. For example, if we apply a movie dataset of 10k records to an autoencoder, what output will it produce? Thanks in advance. Please help, it's urgent.
Thanks, Sonam! Autoencoders are mostly used where a lower-dimensional representation is enough. For example, if you want the output as an image but in a reduced dimension, autoencoders come in handy. They don't reduce the size of your dataset (the number of samples); they reduce the dimensionality of each sample in the output.
Hello,
Beautiful session!
Can you please provide me the autoencoder deep learning code?
Good to know our contents and videos are helping you learn better . We are glad to have you with us ! Please share your mail id to send the data sheets to help you learn better :) Do subscribe the channel for more updates : ) Hit the bell icon to never miss an update from our channel : )
How does dimensionality reduction take place through the encoder?
Autoencoders are trained using both the encoder and decoder sections, but after training only the encoder is used and the decoder is discarded. So, if you want to obtain dimensionality reduction, you have to set the layer between the encoder and decoder to a dimension lower than the input's.
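A tiny NumPy sketch of that "keep only the encoder" step (random weights stand in for a trained model; the dimensions and names are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Suppose an 8 -> 3 -> 8 autoencoder has already been trained; W_enc / W_dec
# stand in for its learned weights (random here, for illustration only).
W_enc = rng.normal(size=(8, 3))   # encoder: input dim 8 -> bottleneck dim 3
W_dec = rng.normal(size=(3, 8))   # decoder: only needed during training

X = rng.normal(size=(50, 8))      # new data to reduce

# For dimensionality reduction, discard the decoder and keep the encoder half:
codes = X @ W_enc
print(codes.shape)                # (50, 3): each sample reduced from 8 to 3 dims
```

The bottleneck width (3 here) is the reduced dimensionality, which is why it must be set smaller than the input dimension for any compression to happen.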
Also, which autoencoder was used in the demo code of the video?
We used the simple autoencoder and the sparse autoencoder for most of the examples.
Could you send sample code for each type of autoencoder?
try this towardsdatascience.com/applied-deep-learning-part-3-autoencoders-1c083af4d798
Please can i get the code used in the demo?
Sir, please upload a practical tutorial about a text-to-speech converter app with your own voice...
You can check this: th-cam.com/video/Gc7X-wLv-qU/w-d-xo.html
How can we use autoencoders for text summarization?
An autoencoder can act as a supervisor for a sequence-to-sequence model, helping it learn a better internal representation for abstractive summarization.
Everywhere in autoencoders the input and output are the same. But in the IEEE Transactions paper I am working on, the input and output of the stacked autoencoder are different, and the paper says they are using an SAE for mapping input to output. Can you explain how an SAE can be used for mapping?
Hey Racharla, a stacked autoencoder is a neural network consisting of multiple layers of sparse autoencoders, in which the outputs of each layer are wired to the inputs of the successive layer. The features from the stacked autoencoder can be used for classification problems by feeding the final layer's activations a(n) to a softmax classifier. Hope this helps!
thanks for the reply @@edurekaIN
Can autoencoders be used for signal processing?
Yes, denoising autoencoders can be used for signal processing.
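A minimal illustration of the denoising idea on a 1-D signal. A real denoising autoencoder is trained on (noisy input, clean target) pairs; here a simple moving average stands in for the trained network, purely to show the error dropping toward the clean signal (all names and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# A clean 1-D signal (a sampled sine wave) and its corrupted version.
t = np.linspace(0, 2 * np.pi, 128)
clean = np.sin(t)
noisy = clean + 0.3 * rng.normal(size=t.shape)

# Stand-in for the autoencoder's output: a 5-tap moving-average smoother.
kernel = np.ones(5) / 5
denoised = np.convolve(noisy, kernel, mode="same")

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
print(err_noisy, err_denoised)  # smoothing cuts the error vs. the clean signal
```

The trained denoising autoencoder plays the same role as the smoother here, except that it learns the structure of the clean signals from data instead of using a fixed filter.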
Hey, great explanation!
If only I could run the code myself and see it working...
Can you share the source code?
try this towardsdatascience.com/applied-deep-learning-part-3-autoencoders-1c083af4d798
can I get sildes please for this video and code
Glad you liked it ! We are glad to have learners like you. Please do share your mail id so that we can send the notes or source codes. Do subscribe our channel and hit that bell icon to never miss any video from our channel
please upload the code also
Ma'am, can you please provide me the autoencoder deep learning code?
We are happy that Edureka is helping you learn better ! We are happy to have learners like you :) Please share your mail id to share the data sheets :) Do subscribe the channel for more updates : ) Hit the bell icon to never miss an update from our channel : )
Can I get source code for the same?
Thanks for showing interest in Edureka! Kindly share your mail id for us to share the datasheet/ source code :) Do subscribe for more videos & updates
Can you plz share the code?
Can i get the source code for this .
Please mention your email id (it will not be published). We will forward the code to your email address.