Subscribe to Google Cloud Tech → goo.gle/GoogleCloudTech
You need to use an encoder-decoder to understand this course
RTS TEXTS AND FEEDS INSTEAD OF SMS
/FEEL/
True 😭😭🤣🤣
I came to the comments to see if I was the only one struggling with this explanation. This was a very difficult video to follow. Does anyone have a better way of summing this up?
I am also looking for a proper course.
I think the Prof. Ghassemi Lectures and Tutorials channel gives a proper explanation. I watched his lecture 6 playlist and found it helpful. Hope it helps you too.
Does anyone have a link to an easier explanation? I could not decode this lesson.
th-cam.com/video/L8HKweZIOmg/w-d-xo.html
Try machine learning AI on audio you created yourself
@@AshkaCambell You didn't have to display your stupidity in full view.
I think there's definitely a simpler way to explain how this architecture works; this one is too hard to understand, tbh
What do you suggest?
I understood nothing. What is happening? A transformer block itself contains an encoder and a decoder, so why is the explanation given in terms of RNNs? Then at the end it says RNNs are replaced by transformers. That makes me more confused!
RNNs were replaced by transformers, which work on the attention mechanism concept. If you want to know more about transformers, follow the learning path suggested in the video.
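For anyone who wants to see the attention idea concretely: below is a minimal scaled dot-product attention sketch (PyTorch; the `attention` function and all sizes are illustrative, not anything from the video). This is the core operation that lets transformers relate every token to every other token in one step, instead of passing information along a chain of RNN states:

```python
# Minimal scaled dot-product attention sketch (illustrative, PyTorch).
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # q, k, v: (batch, seq_len, dim)
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)  # similarity of each query to each key
    weights = F.softmax(scores, dim=-1)                      # how strongly each position attends to the others
    return weights @ v                                       # weighted sum of the values

x = torch.randn(1, 4, 8)      # toy sequence: 4 tokens, 8 dims each
out = attention(x, x, x)      # self-attention: the sequence attends to itself
print(out.shape)              # torch.Size([1, 4, 8])
```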
"shifted to the left"...? 5:01
Microergonomics
Amazing, thank you!
Damn, sounds like Chamber in the Valorant game
Very good, sir
I think this video omits too many details, to the point where I don't see the point of recording it. The video doesn't even show how data is embedded by the encoder and how the vector is produced.
The goal of an encoder-decoder architecture is to "compress" high-dimensional information, be it an image or a natural-language sentence, into a dense, lower-dimensional representation.
Then, in order to minimize the loss generated during learning, the encoder is forced to learn to represent the input data in a lower-dimensional space without losing so much information that it becomes impossible for the decoder to recover the input. This can essentially be seen as the encoder learning to "summarize" its input.
Again, the encoder learns to encode natural language into vector-space representations in the same way the decoder learns to decode it: through training and backpropagation. Showing this process in any meaningful way is difficult because it happens in a deep neural network consisting of RNN blocks. If you don't know what RNNs are, I recommend learning about that architecture first. A minimal code sketch follows.
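If it helps, here is a minimal RNN encoder-decoder sketch in PyTorch. All class names, layer choices (GRU), and sizes are illustrative assumptions, not the video's actual model; the point is just to show the encoder compressing a sequence into one hidden vector and the decoder expanding it back out:

```python
# Minimal seq2seq encoder-decoder sketch (illustrative, PyTorch).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)        # token ids -> dense vectors
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) of token ids
        _, hidden = self.rnn(self.embed(src))
        return hidden                                           # (1, batch, hidden_dim): the "summary" vector

class Decoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)            # hidden state -> vocabulary logits

    def forward(self, tgt, hidden):
        # tgt: (batch, tgt_len) of token ids; hidden: the encoder's summary
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden

# Tiny demo with made-up sizes: vocab of 1000, 32-dim embeddings, 64-dim hidden state.
enc, dec = Encoder(1000, 32, 64), Decoder(1000, 32, 64)
src = torch.randint(0, 1000, (2, 7))    # batch of 2 source sequences, length 7
tgt = torch.randint(0, 1000, (2, 5))    # target sequences, length 5
logits, _ = dec(tgt, enc(src))
print(logits.shape)                     # torch.Size([2, 5, 1000])
```

This also answers the "shifted to the left" question above: during training, the decoder is typically fed the target sequence shifted by one position, so at each step it predicts the next token from the previous ones.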
What am I meant to understand from this??
The enunciation and audio are so muffled.
Far too complicated and difficult to understand.
Supply Chains
Are you the consumer, the user, or the creator?
I'm not alone 😂
*frantic notation intensifies*
Microeconomics
Really, Google? We can do better than this 🙄
worst explanation
/feel/