You create videos that are worth a thousand others, and I understand that it takes time. Your content is so polished, so please don't rush to complete this series.
Loved it, man. I studied with a Stanford ML professional, yet this video is priceless. If you are a Hindi-speaking person, you can grasp the concepts directly without any extra effort. ❤
Bruh, this is something else. It's the first time someone is teaching by referencing a research paper. Keep giving us this content; it's worth millions of dollars, not just thousands. Good work, bro. Mad love to you.
I doubt there's any video on YouTube providing as much value as each of yours does. Thank you, and you have my respect, Nitish Sir!
lot of Love and Respect from Pakistan❤❤❤
"This is the only YouTube channel where I click the like button first and then watch the video, because I know the video is going to be epic." 😄
Finally! Thanks a lot, sir, for uploading the encoder-decoder architecture video. I was really searching for a source to learn it; then your video came, and I bet it's better than all the courses out there. YOUR WAY OF TEACHING IS JUST AMAZING ❤
Sir, I really want to thank you from the bottom of my heart. I got my first job a few days ago as a Machine Learning Engineer. I have been following your channel for the last 6 months, and it has changed my life and career. ❤🙏
How's the demand in the market?
Which company did you join, and what's the package?
Please tell...
You have made me truly feel what machine learning and deep learning are.
The best teacher.. great person.. thank you guruji
Excitement for this playlist is increasing day by day. I eagerly wait for each video. ❤ Thanks, sir!
Thank you for sharing knowledge. You deserve the best teacher award.
Thank you so very much for posting the lecture. I eagerly wait every day for NLP videos; I check the playlist almost daily. Please give the playlist more time and complete the content.
You are one of the finest data science tutors I have come across on YouTube. 🎉
I like the video first and then watch it. 😀 Because I know the playlist is wonderful and one of the best. 🙂
Amazing, sir, literally just amazing and easy to understand. We are very lucky, seriously, to have you. Thank you, sir.
I now understand why you take time uploading your videos. The quality, as you always prefer, is never compromised. I understood everything thoroughly. Grateful for your content. Please don't stop this playlist, and as always, I'd request you to drop the NLP playlist sooner, as you mentioned recently.
Thanks a lot, Sir. I've learned more from your channel than from so-called best-selling courses. Love from Pakistan.
Hello Nitish, I've been following your channel since last year and can't thank you enough for the content you're providing! I have a small request, though: please upload videos on Transformers, BERT, LLMs, and Gen AI, as they're very crucial for interviews. I've tried other YouTube videos too, but they're nowhere near your level. Thanks in advance.
Thank you for such crystal-clear explanation videos. 🙏🙏🙏🙏🙏🙏
Sir, I just watched the whole video! It was amazing. You taught it in such an easy and engaging way that when I started reading the paper, it seemed so easy and genuinely understandable. Thanks a lot, sir, and I appreciate your help! 🙏 ♥
Bro, you've explained these topics better than some renowned AI researchers like Andrew Ng.
I would recommend doing a video on the attention mechanism and Transformers.
Thank you sir!
We love you from the bottom of our hearts ❤ from Pakistan.
This video was just amazing and the concept was pretty clearly explained.
Now we are requesting you to make a video on transformers.
Awesome. You put such great effort into your videos, and it really shows.
Thank you for such wonderful and in-depth explanation.
Could you make a video on autoencoders? Your clear explanations would make it easy for us to understand this intriguing topic.❤
hats off to your efforts....
stay blessed !!!
this video is amazing -> यह वीडियो अद्भुत है ("this video is amazing", translated into Hindi)
Best explanation ever. Eagerly waiting for the Transformer architecture video. Keep it up, @Nitish bro!
Very nice explanation. I have been following you for a couple of months now. Salute to your content, Nitish Bhai!
Your videos are the best ❤. You don't leave a single doubt. I have been waiting for your next video.
Sir, could you please prioritize creating daily videos, especially covering topics like BERT, Transformers, and LLMs? Placement activities are ongoing, so this is urgent. Your expertise is invaluable, and we believe concise, regular video updates would be both attractive and beneficial. Thank you for your prompt attention to this matter.
Thank you, Nitish sir ❤️, for adding value to millions of lives.
Great video, sir. I found it very easy to understand since I have watched all the previous videos in this playlist.
Thank you so much! To learn data science the right way, one should come to CampusX.
Yes, we need that video.
It was fabulous. Can you create a video on the implementation?
Thank you so much, very helpful.
Sir, please try to make videos on "how to fine-tune open-source LLMs for building chatbots."
I will share that video a lot with my friends because we are working in the chatbot domain.
Please, please!
I feel fine-tuning those open-source LLMs is way too computationally expensive and complicated.
Gold explanation by a Gold mentor
Sir, I have a doubt. During training, what if the decoder outputs a sequence longer than the true output sequence? How is the loss calculated in such cases? Will categorical cross-entropy still work?
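On the question above: with teacher forcing, the decoder is unrolled for exactly as many steps as the true sequence during training, so the lengths cannot diverge and ordinary per-step categorical cross-entropy applies; lengths only differ at inference time, where no loss is computed. A toy sketch with hypothetical numbers (not the lecture's code):

```python
import numpy as np

def sequence_cross_entropy(probs, true_ids):
    """probs: (T, vocab) softmax outputs, one row per decoder step.
    true_ids: length-T list of ground-truth token indices."""
    return -sum(np.log(probs[t, tok]) for t, tok in enumerate(true_ids))

# Toy example: vocab of 3 tokens, target sequence of length 2,
# so the decoder is run for exactly 2 steps during training.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.1, 0.8]])
true_ids = [0, 2]
loss = sequence_cross_entropy(probs, true_ids)
print(round(loss, 4))  # -(log 0.7 + log 0.8)
```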
Thanks a ton, sir! I have no words to thank you.
Thank you so much for your efforts in making this video ❤. I really like the way you explain the concepts.
Amazing explanation !!!👍👍
Thank You Sir.
Very Nice Video Sir. Thanks for it.
Amazing explanation
Bhai you are doing God's work
Can you explain beam search in the encoder-decoder architecture?
Really amazing, sir. Please make a paid data science course where you can teach us every day from scratch.
I have a doubt, @Nitish sir. After producing the context vector, what if its size is not the same as the decoder's input token at time step 1? I remember that an LSTM's input length and its hidden- and cell-state lengths should be the same. In the example you used ("think about it"), the context vector would be 5-dimensional but the token a 7-dimensional vector. Doesn't that cause an issue for the LSTM inside the decoder? Please help me out with this question. Thanks in advance.
I guess that's exactly why we use the context vector: to solve the vector-size problem.
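On the size question above: in an LSTM, the input dimension and the hidden/cell-state dimension are independent hyperparameters. The context vector initializes the state while the token embedding is the input, and the weight matrices absorb the difference, so a 5-dim state and a 7-dim token coexist fine. A minimal sketch using the hypothetical sizes from the comment:

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 7, 5  # 7-dim token embedding, 5-dim context/state

# One combined weight matrix for the four gates (i, f, g, o), as many
# libraries store it; its shape reconciles the two dimensions.
W = rng.standard_normal((4 * hidden_dim, input_dim + hidden_dim))
b = np.zeros(4 * hidden_dim)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b      # (20,) pre-activations
    i, f, g, o = np.split(z, 4)             # four 5-dim gate vectors
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

context = rng.standard_normal(hidden_dim)   # 5-dim context vector -> h0, c0
token = rng.standard_normal(input_dim)      # 7-dim token embedding -> x1
h1, c1 = lstm_step(token, context, context)
print(h1.shape, c1.shape)  # (5,) (5,) -- no size conflict
```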
great effort sir
After waiting so very long, at last: thanks!
Great video, sir. Could you increase the frequency of deep learning video uploads?
Thank you so much sir 🙏🏻🙏🏻
Awesome content sir 🔥🔥
waiting for next video
Do we have any coding example notebook for the same?
40:20 What happens if <end> doesn't appear in the prediction? Then what input do we give the LSTM at the next time step?
If <end> doesn't come, then during the training phase we feed the correct input anyway: even if the output of the last decoder step is wrong, we use teacher forcing and pass the correct input vector to the next step.
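The teacher-forcing idea in the reply above can be sketched with a toy loop (the dummy decoder step is purely illustrative, not a real model):

```python
def dummy_decoder_step(prev_token):
    return "wrong"  # pretend the decoder mispredicts at every step

def train_decode(target):
    """During training, each step is fed the ground-truth token,
    even when the previous prediction was wrong or <end> never appeared,
    so decoding always runs for exactly len(target) steps."""
    fed, preds = [], []
    prev = "<start>"
    for truth in target:
        preds.append(dummy_decoder_step(prev))
        fed.append(prev)
        prev = truth  # teacher forcing: ignore the (wrong) prediction
    return fed, preds

fed, preds = train_decode(["I", "am", "fine", "<end>"])
print(fed)  # ['<start>', 'I', 'am', 'fine']
```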
Thanks a ton, and I don't think we even need to read the paper after watching this video 😀
Thank you so much for creating this kind of content.
Sir, please make a video on object detection too...
That will take time.
I have a doubt. In the decoder, we pass the current time step's Ct and Ht to the next time step, along with the current step's output, which is also Ht. So does that mean we pass Ht twice to the next time step in the decoder? Kindly clarify.
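On the Ht question above: in a standard LSTM cell, the step's output and the hidden state handed to the next step are the same vector h_t, so nothing is sent twice; one h_t serves both roles. A tiny sketch (stand-in arithmetic, not real LSTM gates):

```python
import numpy as np

def fake_lstm_step(x, h, c):
    # Stand-in arithmetic, not real gate equations; the point is only
    # that the "output" y and the new hidden state are one vector.
    c_new = 0.5 * (c + x)
    h_new = np.tanh(c_new + h)
    y = h_new               # the step's output IS the new hidden state
    return y, h_new, c_new

x = np.ones(3)
h = np.zeros(3)
c = np.zeros(3)
y, h_next, c_next = fake_lstm_step(x, h, c)
print(y is h_next)  # True -- one vector, used once as output, once as state
```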
Good explanation 👍👍
waiting for next video
We don't use OHE, so please always use word2vec embeddings as the example.
Thanks sir
Great explanation❤️
What will be the dimension of the softmax layer when we use embeddings, since the dimension gets reduced?
THE BEST..♥♥
20:27 Sir, in an LSTM the cell state is the upper line and the hidden state the lower one, right? But in the diagram it's the opposite, which is confusing me.
Kindly clarify: is it the same thing either way?
sir you are just amazing . ❤❤❤
India's Andrew Ng-The Teacher
I think better than him
Great, but can you create a working model in Python to show this exact process?
Marvelous sir
Sir, please provide an implementation of Machine Translation
He provides better explanations than Andrew Ng 🥳
What a video, sir. Amaziiiiing!
really thanks for this video ..
this is helpful 🖤🤗
Upload Next Video SIr
Sir, can you make some videos on VAEs (variational autoencoders)?
Waiting for the sequel video !
Liked ,Liked and liked!!!!
Thank you sir , please complete the nlp playlist asap 🙏✨
When is the next seq2seq video coming?
Is there any link to the seq2seq code?
Do we remove the start token from the ground truth before computing the loss with the predicted caption?
can anyone help
thanks
👏
Boss, thank you!!!!
Sir, please upload lecture notes so that we can revise.
At last! Thanks a lot
Great
Thanks a lot!
Dear sir,
requesting that you add a few videos on one-stage and two-stage object detection.
The 100 Days series is not complete, sir. Please upload more videos on DL.
Sir, a humble request: kindly complete the Python playlist.
Thank you so much, sir ❤
DSMP 2.0: is there any option to pay for the course in EMIs or monthly? Please reply.
Wow ❤❤
We need English subtitles 🥲
👏👏
Nitish, you could make more Shorts.