LSTM in Keras | Understanding LSTM input and output shapes
- Published 26 Jan 2025
- complete playlist on Sentiment Analysis: • Sentiment Analysis wit...
Watch the complete course on Sentiment Analysis with Keras: www.skillshare...
Get the course on "Sentiment Analysis with Keras" on Udemy @ $10: www.udemy.com/...
************* Best Books on Machine Learning : ***************
1. Introduction to Machine Learning with Python: A Guide for Data Scientists: amzn.to/2TLlhAR
2. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems: amzn.to/2wKtPij
3. Pattern Recognition and Machine Learning (Information Science and Statistics): amzn.to/33aNXpL
4. Deep Learning with Python - François Chollet: amzn.to/39ISNgv
5. Deep Learning (Adaptive Computation and Machine Learning series) - Ian Goodfellow: amzn.to/2vMPVR7
6. Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning series) - Kevin P. Murphy: amzn.to/33aNrYN
********************************************************************
lstm in keras,
lstm input and output shapes,
Embedding layer Keras,
word embedding,
#lstm #keras #sentimentClassification
After struggling for almost one year, I finally understood LSTM clearly today :) thanks sir
Glad to hear that
At start, a voice that comes from beyond
Amazing... At 2:30, in line 3, are "units" the same as "neurons"?
At @10:40 you say, this is the output of the first word. Do you mean the first group of 10 words, the first sentence review?
Thank you for this video . This is one of the best explanations given on input and output shapes
Thanks.
After a long time, all my doubts about the practical side of LSTM are cleared... thanks a lot
Glad to hear that
Thanks sir for the wonderful explanation.........
Welcome.
Hi, I'm getting this error ValueError: Input 0 of layer lstm_5 is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 64, 13, 250).
my train_X.shape is (1130, 13, 250)
train_Y.shape is (1130, 59)
My code:
model = Sequential()
model.add(LSTM(64,return_sequences=False,input_shape=(64,13,250)))
model.add(Dropout(0.3))
model.add(LSTM(64,return_sequences=False))
model.add(Dense(59, activation='softmax'))
I'm not sure where I'm going wrong, could you please look at this once. Thanks in advance
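A sketch of the likely fix (assuming TensorFlow 2 / Keras, not the author's reply): `input_shape` must exclude the batch dimension, so for `train_X` of shape (1130, 13, 250) it should be `(13, 250)`. Also, when two LSTMs are stacked, the first needs `return_sequences=True` so the second receives a 3D tensor.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

# input_shape excludes the batch dimension: each sample in
# train_X (1130, 13, 250) is (13, 250) -> 13 timesteps of 250 features.
model = Sequential()
# The first of two stacked LSTMs must return the full sequence
# so the second LSTM gets a 3D input.
model.add(LSTM(64, return_sequences=True, input_shape=(13, 250)))
model.add(Dropout(0.3))
model.add(LSTM(64, return_sequences=False))
model.add(Dense(59, activation='softmax'))

print(model.output_shape)  # (None, 59)
```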
When we get the output of the LSTM as a 7x64 matrix, does it get flattened before being fed to the fully connected layer?
Do we need to flatten (7, 64) before sending it to the dense layer? Please help
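A minimal sketch that may clear this up (assuming the video's shapes of 7 sentences, 10 words, 100-dim embeddings): with `return_sequences=False` the LSTM outputs only the last hidden state per sample, so the (7, 64) matrix is already 2D and a Dense layer needs no Flatten. Only the 3D output from `return_sequences=True` would.

```python
import numpy as np
from tensorflow.keras.layers import LSTM

x = np.random.rand(7, 10, 100).astype('float32')  # (batch, timesteps, features)

# return_sequences=False: only the last hidden state per sample,
# shape (7, 64) - already 2D, so no Flatten is needed before Dense.
last = LSTM(64, return_sequences=False)(x)
print(last.shape)  # (7, 64)

# return_sequences=True: all 10 hidden states per sample, (7, 10, 64);
# this 3D output would need Flatten or pooling before a plain Dense head.
seq = LSTM(64, return_sequences=True)(x)
print(seq.shape)  # (7, 10, 64)
```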
Honestly, by far the best video explaining input shape and output shape. Well done!! Freakin brilliant. Could you explain the other parameters/arguments used in LSTMs and dense layers please!! It would really help me a lot!!
Thanks. Will try to create some videos on Keras in April.
Brilliant! Looking forward to the next video.
Great explanation. With the plug-and-chug approach using the library, I didn't really understand what it was doing; it had been a black box to me all this time until I found your video.
Glad it helped!
Units parameter was so confusing. Thanks for making it clear.
Glad it helped!
Thank you so much sir! You gave the best and simplest explanation, which made me understand the embedding layer, LSTM layer, and dense layer in the code.
Glad it helped.
Just the video that's needed. Thanks
Glad it helped
Brilliant. Thanks
Thanks.
At the time stamp 5:34, why are you keeping the batch shape as (3, 80, 100)? Shouldn't it be (3, 100, 80)? BTW, thanks for the explanation!
It is the Keras convention: inputs are ordered (batch_size, timesteps, features), so the 80 timesteps come before the 100 features.
Still don't understand why the output dimension is 64, if the LSTM cell outputs only the hidden state, which is only one value, not 64.
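A small sketch that may resolve this (not the author's reply): in Keras the hidden state is not a single value but a vector of length `units`, which `return_state=True` exposes directly.

```python
import numpy as np
from tensorflow.keras.layers import LSTM

x = np.random.rand(1, 10, 100).astype('float32')  # (batch, timesteps, features)

# return_state=True exposes the final hidden state h and cell state c;
# with units=64 each is a 64-dimensional vector, not a single number.
out, h, c = LSTM(64, return_state=True)(x)
print(h.shape, c.shape)  # (1, 64) (1, 64)
```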
Thank you sir for the helpful video. I have a doubt however: so, does the predict() function in Python give the output of the LSTM layer ? I do not understand why this function should give the LSTM output...
Awesome video! thank you so much. If you write model.add(Embedding (1000, 500, input_length =X.shape[1])), Is the number of neurons in the embedded layer 500? or is it 1000? Also is the embedded layer the same as the input layer? thanks so much !
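A hedged sketch of what those two numbers mean (not the author's reply): in `Embedding(1000, 500)`, 1000 is the vocabulary size (number of distinct word indices) and 500 is the size of each word vector, so the layer has no "neurons" in the Dense sense. It is the first layer of the model, mapping integer word IDs to dense vectors.

```python
import numpy as np
from tensorflow.keras.layers import Embedding

# 1000 = vocabulary size, 500 = embedding dimension per word.
emb = Embedding(1000, 500)

ids = np.array([[1, 5, 9, 2, 0, 0, 0, 0, 0, 0]])  # one padded 10-word sentence
out = emb(ids)
print(out.shape)  # (1, 10, 500): each of the 10 word IDs became a 500-dim vector
```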
Thanks a lot for this wonderful video! I have a question regarding the size of input- In my data, I have a time-series of size 8x12000 (where 8 is number of features and 12000 is the time samples). Also I have corresponding labels or targets of size 1x12000. I would like to predict the labels for a new test time-series. So, how should I reshape this, since it also has a temporal dependency; is it okay to split it into 20 samples (for example) each, such that it becomes 600 observations each of size 8x20? What would you suggest? Thanks in advance!
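A sketch of one way to do the split described above (the window length 20 is the questioner's own assumption; the right choice depends on the data's temporal structure): transpose to time-major order and cut into non-overlapping windows.

```python
import numpy as np

# Hypothetical data matching the question: 8 features over 12000 time
# samples, with one label per time sample.
X = np.random.rand(8, 12000)
y = np.random.randint(0, 2, size=12000)

# Keras expects (samples, timesteps, features), so transpose to
# (12000, 8) and cut into non-overlapping windows of 20 timesteps.
window = 20
Xw = X.T.reshape(-1, window, 8)      # (600, 20, 8)
yw = y.reshape(-1, window)[:, -1]    # label of each window's last step
print(Xw.shape, yw.shape)  # (600, 20, 8) (600,)
```

Overlapping (sliding) windows are another option and preserve more of the temporal dependency at the cost of more samples.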
Why 100 embedding dimensions? Why 64 outputs? Just ad hoc?
Thank you for this informative video, but I have an issue: when I fit my LSTM model, the input_shape is divided by the batch_size. I am not able to find a solution for that; I would be glad if you could help.
Check the input shapes in the plot_model() diagram. What is the input shape shown there?
The input shape of an LSTM is a 3D tensor with shape (batch_size, time-steps (the sequence length), input_dimension (features per step)).
from tensorflow.keras.utils import plot_model
plot_model(model, show_shapes=True, to_file='model.png')
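That convention can also be checked without plot_model; a minimal sketch: the batch size is left out of `input_shape` and shows up as None.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

# Input is (batch_size, timesteps, features); batch_size is omitted
# from input_shape and appears as None in the model's input shape.
model = Sequential([LSTM(64, input_shape=(10, 100))])
print(model.input_shape)   # (None, 10, 100)
print(model.output_shape)  # (None, 64)
```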
Awesome.. it helped
Glad to hear that
Beautifully explained.
I've a question: how do I reshape data with shape (11800, 400) for X train, where Y train is (11800, 2)? The purpose is to convert the data into 3 dimensions to feed into a TimeDistributed layer and then an LSTM. The model is CNN + TimeDistributed + LSTM.
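A hedged sketch of the reshape itself (not the author's reply, and the 20x20 split is purely an illustrative assumption; the right factorization of 400 depends on what the 400 values represent):

```python
import numpy as np

# Hypothetical shapes from the question: 11800 samples of 400 values each.
X = np.random.rand(11800, 400)

# TimeDistributed needs (samples, timesteps, ...); one possible choice
# is to treat each sample as 20 timesteps of 20 features.
X3d = X.reshape(11800, 20, 20)
print(X3d.shape)  # (11800, 20, 20)
```

Y train of shape (11800, 2) can stay as-is for a final 2-class Dense head.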
Great video for the detailed explanation. Thanks.
Thanks.
One more great video, as always...Thank you so much.
Thanks.
Hello, Thank you for your video and I have one question:
When 1 of 7 sentences goes into LSTM, does it read first column, then the second column, until it reaches 10th column? Then it repeats the process with 2 of 7 sentences until it reads all 7 sentences?
Yes, it goes word by word through each sentence.
Nicely explained, Thank You!
Glad it was helpful!
this cleared things up for me. Thanks!
Glad to hear it!
At last someone explained the input and output shapes, batch_size, sequence_length and input_dim arguments of LSTM CELL. It is not explained in documentation and it took me 2 days until I watched this video to understand what these dimensions and shapes mean.
Can you please make a series on how to build SimpleRNNs and LSTMs from scratch? It would be very helpful for beginners. I agree with what you said; the Keras documentation is poorly written. Luckily people like you, Ahlad Kumar, and others help us understand the practical aspects.
Excellent video sir, and excellent explanation. If you ever do a video on TimeDistributedDense layer or RepeatVector Layer please let me know. Thank you very much sir
Thank you for this clear explanation
Glad it was helpful!
thank you so much
Glad it helped.
Very nice video, thank you. Can you please do this for some other type of data, like stock forecasting, temperature prediction, etc.? It would be really helpful.
no videos on this topic. thanks
You're welcome
very useful
Glad to hear that
fkin brilliant
Thanks.