You saved me days of work! This video explains the process so well that I finally managed to apply an LSTM encoder-decoder to my own dataset by following your explanations. I was struggling with my code, and this saved me days of debugging. You are an incredible teacher, keep up the good work. I am looking forward to watching your future videos (subscribed).
I like this video. It clearly explains the encoder-decoder LSTM model. As many people have said, it is very difficult to go from theory to code, and you help a lot with that problem. Thank you.
Thank you so much for this. When I try the code, at the very first step "!pip install -qq arff2pandas", it shows: "ERROR: Could not find a version that satisfies the requirement arff2pandas (from versions: none) ERROR: No matching distribution found for arff2pandas". Expecting your reply!
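A workaround several commenters could try (a sketch, not from the video): if arff2pandas is no longer installable from PyPI, scipy's built-in ARFF reader can load the same files into a pandas DataFrame. The inline ARFF string below is a toy stand-in for the real dataset file.

```python
from io import StringIO

import pandas as pd
from scipy.io import arff

# Toy ARFF content standing in for the real ECG .arff file (hypothetical values).
sample = StringIO("""@relation ecg
@attribute att1 numeric
@attribute att2 numeric
@attribute target {1,2}
@data
0.1,0.2,1
0.3,0.4,2
""")

data, meta = arff.loadarff(sample)  # scipy parses ARFF into a structured numpy array
df = pd.DataFrame(data)             # same kind of DataFrame arff2pandas would give
print(df.shape)  # (2, 3)
```

To read an actual file, pass its path to `arff.loadarff` instead of the `StringIO` object.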
Hey, great content. I've been reading the theory behind LSTM autoencoders (after implementing a vanilla autoencoder), and was having a hard time going from theory to code. This will help a lot. Subscribed.
It would be good to cite the actual publication of this method in the video description and in the blog post: "LSTM-based Encoder-Decoder for Multi-sensor Anomaly Detection". Malhotra et al, 2016.
Sir, our project is Real-Time Patient-Specific ECG Classification Using Machine Learning, so please guide us on the equipment and methods that would work for this.
I couldn't install arff2pandas... ERROR: Could not find a version that satisfies the requirement arff2python (from versions: none) ERROR: No matching distribution found for arff2python
Thank you for explaining your LSTM Autoencoder. I've tried implementing it using multivariate data (2 features). However, the model fails in the Encoder's forward function:
def forward(self, x):
...
return hidden_n.reshape((self.n_features, self.embedding_dim))
It says the output is of shape (2, 128) and should be (128). Any ideas on how to incorporate multiple features here?
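One possible fix, sketched here under the assumption that the encoder is a standard `batch_first` LSTM (this is not the author's exact code): return the last layer of `hidden_n` instead of reshaping to `(n_features, embedding_dim)`, so the embedding shape no longer depends on the number of input features.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Minimal multivariate LSTM encoder sketch (hypothetical, not the tutorial's class)."""
    def __init__(self, seq_len, n_features, embedding_dim=128):
        super().__init__()
        self.rnn = nn.LSTM(input_size=n_features, hidden_size=embedding_dim,
                           num_layers=1, batch_first=True)

    def forward(self, x):
        # hidden_n has shape (num_layers, batch, embedding_dim);
        # the last layer is a clean (batch, embedding_dim) for any n_features.
        _, (hidden_n, _) = self.rnn(x)
        return hidden_n[-1]

enc = Encoder(seq_len=140, n_features=2)
z = enc(torch.randn(4, 140, 2))  # batch of 4 sequences, 140 steps, 2 features
print(z.shape)  # torch.Size([4, 128])
```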
Sir, I tried to download and load the pre-trained model, but I am getting an error on the 3rd line of the code below:
!gdown --id 1jEYx5wGsb7Ix8cZAw3l5p5pOwHs3_I9A
model = torch.load('model.pth')
model = model.to(device)
The error is "AttributeError: 'LSTM' object has no attribute 'proj_size'". How can I rectify this error? Please reply to my query as soon as possible.
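That error usually appears when a whole model pickled with an older PyTorch (before `nn.LSTM` gained the `proj_size` attribute in 1.8) is loaded under a newer version. A common workaround, sketched below with a toy module rather than the tutorial's model, is to save and load only the `state_dict` and rebuild the architecture under the current PyTorch.

```python
import os
import tempfile

import torch
import torch.nn as nn

class TinyLSTM(nn.Module):
    """Toy stand-in for the tutorial's autoencoder class."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=8, batch_first=True)

    def forward(self, x):
        out, _ = self.rnn(x)
        return out

path = os.path.join(tempfile.mkdtemp(), 'model_weights.pth')
torch.save(TinyLSTM().state_dict(), path)   # save weights only, not the pickled module

fresh = TinyLSTM()                          # rebuild the class under the current torch
fresh.load_state_dict(torch.load(path))     # no version-dependent LSTM attributes involved
```

With a pickle from an old version you may still need to re-save it once in the environment where it was created.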
I think you are using the autoencoder incorrectly. You use it like a fully connected network and do not use the latent space of features. You should connect a linear layer to the encoder output to obtain an embedding for your purposes.
I don't think you need a linear layer specifically. You can see that he reduces the dimensionality through a second LSTM layer inside of the encoder class.
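For concreteness, here is a sketch of the linear-head variant being discussed (toy sizes, hypothetical class, not code from the video); the video's approach of a second, smaller LSTM inside the encoder achieves a similar compression.

```python
import torch
import torch.nn as nn

class EncoderWithHead(nn.Module):
    """Hypothetical encoder that compresses via a Linear layer instead of a second LSTM."""
    def __init__(self, n_features=1, hidden_size=128, embedding_dim=64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, embedding_dim)

    def forward(self, x):
        _, (h_n, _) = self.rnn(x)       # last hidden state: (num_layers, batch, hidden_size)
        return self.head(h_n[-1])       # project down to (batch, embedding_dim)

z = EncoderWithHead()(torch.randn(4, 140, 1))
print(z.shape)  # torch.Size([4, 64])
```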
Do you have a project about ECG signal classification? Thanks!
Thank you so much for the tutorial. Please do more about bio-signals, because there isn't much material on the internet focusing on this...
Best channel ever among all hands-on AI topics, you cover the subject so well! Venelin, one day I hope you will walk us through a manufacturing use case.
Love from Korea :)
Thank you very much for the useful tutorial.
Thank you Venelin for your good work.
Thank you for watching ❤️
Why would anyone dislike this? Seriously, I am genuinely asking.
Great video. Helped me develop a model for my task. Thanks!
Greetings and many thanks from Germany. :)
It's really good ❤
When we need to detect a specific anomaly, how can we do it?
Fantastic as usual maestro!
Sir, please guide me on how to extract features from an ECG. Which classifiers or methods are used? Please, sir.
Could you do a video on LSTM neural networks (PyTorch) with multivariate time series and windowing? That would be amazing!!!
Could you give us examples of multivariate time series data? Are biosignals this kind of data? Thanks.
Very nice tutorial!
Thank you so much for this awesome video and the crystal clear explanation
If you're going to append the train and test data, then how are we going to test it?
Thanks so much for this! Really helpful tutorial with good explanations.
Question: why are we repeating x by (140, 1)?
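A guess at what the repeat is doing, shown with toy sizes (this is a sketch, not the video's exact decoder code): `repeat(140, 1)` tiles the single embedding vector once per timestep, so the decoder LSTM receives a copy of the latent vector at every step of the 140-sample sequence.

```python
import torch

embedding = torch.randn(1, 64)        # (1, embedding_dim), toy embedding size
repeated = embedding.repeat(140, 1)   # tile 140 times along dim 0, once along dim 1
print(repeated.shape)  # torch.Size([140, 64])
```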
Thank you for your clear explanation. I tried to run the code, but the training took too long. I got 15 epochs in 5 hours!! Is that normal?
Where is the dataset, so I can understand the code?
Is this removing artifacts and noise?
That is so cool, thanks for sharing this. I wonder if there is one for electromyography. +1!
It would be awesome for EEG, wow!
Are you for real? You are just amazing.
Thanks Venelin, it's a great video. How long did the training take for 150 epochs (it's taking hours for me)? Any tips on how to speed it up?
Why use batch_first=True?
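What the flag changes, in a small sketch with toy sizes: with `batch_first=True` the LSTM expects input shaped `(batch, seq_len, features)` instead of PyTorch's default `(seq_len, batch, features)`, which matches how the sequences in tutorials like this are usually arranged.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=3, hidden_size=5, batch_first=True)
x = torch.randn(2, 7, 3)   # (batch=2, seq_len=7, features=3)
out, _ = lstm(x)
print(out.shape)  # torch.Size([2, 7, 5]) -- the batch dimension stays in front
```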
Is it possible to convert this into Pytorch Lightning?
yes
Good work
Please make some amazing advanced ML video lectures related to medical imaging, like histological images, Parkinson's, and other medical imaging datasets.
Can the source code from this video be seen somewhere?
Yes check the description of the video
Love from India!
Has anyone had issues opening the .arff file? I am not able to install it with !pip install -qq arff2pandas.
Were you able to solve it? If yes, please tell me. Thank you.
The training will take forever because the batch size is 1.
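One way to speed things up, assuming the sequences all have equal length as in this dataset (a sketch, not the video's code): wrap them in a `DataLoader` with a larger batch size so each forward pass processes many sequences at once instead of one.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

data = torch.randn(500, 140, 1)  # toy stand-in: 500 sequences, 140 steps, 1 feature
loader = DataLoader(TensorDataset(data), batch_size=32, shuffle=True)

batch, = next(iter(loader))      # 32 sequences per training step instead of 1
print(batch.shape)  # torch.Size([32, 140, 1])
```

The model's forward pass and loss then need to handle a leading batch dimension, which `batch_first=True` LSTMs already do.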
Please, sir, you are the hope for my project.