Thank you. It is very clear and informative, though I really think you (AssemblyAI) should lose the background music; it is distracting and gives the whole thing an infomercial feeling.
Somehow the music had a motivational influence for me. I caught myself vibing to it a few times
I would love the no-music option too.
There are maybe 30 videos on this topic and this is the only one that does not suddenly make a massive jump across whole concepts that the presenter knows but the watcher does not.
Would love a video on ELMo further. Thanks for all this!
Amazing video. Perfectly clear speech, good explanations, logical visualisations, and the background music makes it a lot easier to focus. Thank you!!
Excellent explanation. I did some studying on this topic before coming here, because so many terms and concepts were quite overwhelming. I generally understood them but still missed the fine-tuned clarity. After watching this video, most of what I read before started making a lot of sense. I highly recommend this video. Thank you so much.
This is great to hear! You are very welcome!
Amazing content. Exactly what a learner wants: all the concepts in a single video, explained in an easy-to-understand way, in minimum time.
Wow, such a good presenter. I really like the examples, super clear. This stuff is amazing.
Excellent! Thank you so much for making an absolutely clear explanation.
These are great explanations of the semantics. I haven't done much since one-hot vectors in the 2012 Coursera ML course.
Thanks for this easy-to-understand video! 😊 At 8:25 it was mentioned that CBOW's embeddings are the input weights and in skip-gram the output weights are used as embeddings. Does someone have a reference for that? Most papers I've found say that the input weights are used in both models.
Thanks for taking the time to break this down and share!
You are very welcome! - Mısra
Great explanation. Please explain ELMo and other approaches. Also, please make a video about efficient ways of clustering the embeddings 👍
Thank you Sajjad for the suggestion!
Awesome overview. Loved it. Waiting for videos explaining GloVe and ELMo.
Great to hear you liked it!
Very good video. I second the other comments. PLEASE drop the music completely. It would increase the quality of the experience by at least 70%. I had a hard time finishing the video because of the music.
Thanks, will do!
Simple and clear explanation. Please explain ELMo, thanks.
At 8:15 I am having a problem with the sentence "number of neurons in the hidden layer = size of the embedding". I am confused: what is the size of the embedding?
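In case a worked example helps: in word2vec, the input weight matrix has one row per vocabulary word and one column per hidden neuron, so multiplying a one-hot vector by it just selects a row, and that row is the word's embedding. A tiny numpy sketch (the sizes are made up for illustration):

```python
import numpy as np

V, N = 10000, 300            # vocabulary size, hidden-layer size (hypothetical numbers)
W_in = np.random.rand(V, N)  # input weight matrix learned during training

word_id = 42                 # index of some word in the vocabulary
one_hot = np.zeros(V)
one_hot[word_id] = 1.0

embedding = one_hot @ W_in   # shape (N,) -- identical to the row W_in[word_id]
assert np.allclose(embedding, W_in[word_id])
print(embedding.shape)       # (300,)
```

So "size of the embedding" is simply N, the number of neurons in the hidden layer.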
Thanks for the explanation of word embeddings. Nicely done!
Great explanation! I went through the topics for hours and hours, but this channel saved my time. And it's on target.
Great to hear!
Thank you so much, there is no other video that explains this so well.
Thank you. I want to ask: are there any techniques that use Hidden Markov Models to represent the embeddings?
Great explanation! Thank you! Please drop the music for the next videos.
The absolute best video I've seen on this topic!!
Very informative and clear explanation
Great explanation. Please explain ELMo and GloVe. It was really great.
Thank you for the suggestions!
@@AssemblyAI I'd love to see those videos too
Excellent presentation. I will be teaching this topic to students shortly and will recommend this material.
Great to hear, thank you!
Very nice explanation of the embedding concept. Would love to see pre-trained word embeddings for sentiment analysis.
Thanks for the video! I've enjoyed watching it and liked the format and pace. I'd add the retrowave background track to my playlist if I knew its name. I guess people would notice it less if the volume were lower.
Thank you for the great explanation! Would love to see pre-trained word embeddings for sentiment analysis.
Excellent explanation. I have one question, please: how could I fit my model with these embedding vectors? For example, in one of my projects for extracting information from files, instead of using raw text for training my models I thought of using embeddings, but I don't know the best way to represent them to my model. I hope you understand my question, and thank you.
Thanks, dear. Nicely paced intro. Good for a recap.
Glad you liked it
Thank you, very clear. I need to know how to use word embeddings for text classification.
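One common baseline for this (not covered in the video, just a hedged sketch): average a document's word vectors and feed the result to any standard classifier. This assumes gensim's downloadable GloVe vectors and scikit-learn, with made-up toy data:

```python
import numpy as np
import gensim.downloader as api
from sklearn.linear_model import LogisticRegression

wv = api.load("glove-wiki-gigaword-50")  # small pretrained embeddings

def doc_vector(text):
    # Average the vectors of in-vocabulary words; zeros if none are known.
    words = [w for w in text.lower().split() if w in wv]
    return np.mean([wv[w] for w in words], axis=0) if words else np.zeros(wv.vector_size)

texts  = ["i loved this movie", "terrible boring film", "what a great story", "awful acting"]
labels = [1, 0, 1, 0]  # toy sentiment labels

X = np.stack([doc_vector(t) for t in texts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([doc_vector("really great movie")]))  # hopefully [1]
```

Averaging loses word order, but it is a surprisingly strong starting point before moving to anything fancier.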
This was awesome. Would love to see the ELMo video and the sentiment analysis video you mentioned possibly making!
Great explanation in less amount of time. Really liked the video.
That's great to hear!
Why are you so good? Where were you all this while? God bless you.
Thanks! Great information in a very objective way!
This is amazing. Can you share the Python notebook you show at 12:33?
Thank you. Very good and complete explanation.
Yes to all videos you suggest making! Great guide, thank you. I was struggling to see the value in lemmatization and was concerned about a loss of coherence; seeing several worked examples is great. I'm interested in how the final results were all different but all had a similarly high percentage match. How do you tackle this?
Interested in “Creating your own embedding before doing binary or multi-label classification prediction”! Thanks for the clarity.
Top video for an introduction to embeddings.
Very interested in an in-depth explanation of ELMo.
Would it be possible to use word embeddings to ask if a text is about a certain topic (or rather, to what degree a text is about a topic)?
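Yes, a rough way to do this is cosine similarity between an averaged document vector and the topic word's vector. A sketch, again assuming gensim's pretrained GloVe vectors (the sentences and topic words are made up):

```python
import numpy as np
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")

def doc_vector(text):
    words = [w for w in text.lower().split() if w in wv]
    return np.mean([wv[w] for w in words], axis=0)

def topic_score(text, topic):
    # Cosine similarity between the averaged document vector and the topic word.
    d, t = doc_vector(text), wv[topic]
    return float(np.dot(d, t) / (np.linalg.norm(d) * np.linalg.norm(t)))

print(topic_score("the chef cooked pasta with garlic and olive oil", "food"))      # higher
print(topic_score("the chef cooked pasta with garlic and olive oil", "politics"))  # lower
```

The score is a degree of aboutness rather than a yes/no answer, which matches the second half of the question.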
Do you have a link to the Python notebook you go over at the end?
Great and very illustrative video
Thanks for the video! I do have a question: when you said that, for instance, in CBOW there is only one layer, that means the output of this layer should be a vector the size of the embedding dimension. But in order to train the model we need to compare this output with the word in the middle, which is actually a one-hot encoded vector the size of the vocabulary. So it might have another layer and a softmax.
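You're right. Besides the projection down to the embedding size, CBOW has an output weight matrix that projects back up to vocabulary size, followed by a softmax (in practice approximated with negative sampling or hierarchical softmax). A minimal PyTorch sketch with hypothetical sizes, not the video's exact implementation:

```python
import torch
import torch.nn as nn

V, N = 10000, 300  # vocabulary size, embedding size (hypothetical numbers)

class CBOW(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(V, N)          # input weights: the embeddings
        self.out = nn.Linear(N, V, bias=False)   # output weights: back to vocab size

    def forward(self, context_ids):              # context_ids: (batch, window)
        h = self.embed(context_ids).mean(dim=1)  # average context vectors -> (batch, N)
        return self.out(h)                        # logits over the vocabulary -> (batch, V)

model = CBOW()
logits = model(torch.randint(0, V, (2, 4)))               # 2 examples, 4 context words each
loss = nn.CrossEntropyLoss()(logits, torch.tensor([42, 7]))  # softmax lives inside the loss
print(logits.shape, loss.item())
```

After training, the nn.Embedding matrix is what gets kept as the word embeddings; the output matrix and softmax exist only to make training possible.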
Simply the best 😊
Great videos there, thank you for your content and keep up the good work!
Would be great to see a video on ELMo!
Thank you for the suggestion, noted!
Is it possible to create a map of related words with different relationships between them automatically in Python?
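A hedged sketch of one way to start: pull nearest neighbours from a pretrained model and build a graph with networkx. Note that plain word embeddings only give an untyped similarity score, not labelled relationship types; the model name and seed words below are just examples:

```python
import gensim.downloader as api
import networkx as nx

wv = api.load("glove-wiki-gigaword-50")

G = nx.Graph()
for seed in ["king", "coffee", "river"]:
    for neighbour, score in wv.most_similar(seed, topn=5):
        G.add_edge(seed, neighbour, weight=round(score, 2))  # similarity as the edge label

for u, v, data in G.edges(data=True):
    print(f"{u} -- {v} (similarity {data['weight']})")
```

For typed relationships (is-a, part-of, etc.) you would need a knowledge graph or relation-extraction step on top of this.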
Can the embeddings from a Transformer be used elsewhere, like with Word2Vec?
Great work !!
Can you make a video on ELMo and Transformer-based word embeddings?
Great suggestion!
Very well explained!! Thank you so much
You're welcome!
Thank youuuu, it's my first video, but I guess I should make your videos my priority. I'm in NLP. Thanks a lot ❤
How large should the data for a custom embedding be, and is it possible to utilize a GPU for the creation of a word embedding vector space?
Great explanation! Thanks for sharing
Would love to see a video on building an ELMo embedding model. Thanks for this one.
Great tutorial. She speaks like a native speaker. She looks like a Turkish girl, a beautiful one :)
Why is there a background soundtrack during the lecture? Does it help with learning or focus? I find it kinda distracting and feel rushed.
Is there a video about sentiment analysis yet?
Is there a sentiment training model video that builds from this? Trying to build a recommendation system based on candidate sentences and a job description
We don't have that video yet but thank you for the suggestion!
Much appreciated for the awesome crystal-clear explanation. However (just a little detail), could you lower the background music? It's kinda distracting for me. Actually, I feel like I'm waiting for customer service on the phone. If anyone feels the same way, please let me know, guys!
From the embeddings of your name, I removed those of "work", added "great" and "relationship" and I came up with the embeddings of my own name? How come? Mere coincidence? 🤔🤔
Great video, btw!
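For anyone who wants to try that kind of vector arithmetic themselves, gensim's most_similar supports it directly. A minimal sketch with a pretrained model and the classic king/queen example (the model choice is just an assumption):

```python
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")

# vector("king") - vector("man") + vector("woman") is closest to vector("queen")
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```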
It's a really good explanation, thank you very much :)
You are welcome!
Amazing video!!! ❣❣❣ Thanks for sharing.
How do I know which embedding will be the best choice for a specific use case? How do I know which distance measure will be best?
It depends on your use case. If it mostly contains general words like tea, king, actor, etc., then you may try different pretrained embeddings and see for yourself which ones work well for particular examples from your use case. Or, if your use case is quite specific (something like representing skills as vectors), then you may need to train your own word2vec model on your data, since pretrained embeddings may not cover what you need.
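If you go the custom-training route, here is a minimal gensim sketch; the corpus is a made-up placeholder, and in practice you would feed in thousands of tokenized sentences from your own domain:

```python
from gensim.models import Word2Vec

# Toy corpus: replace with tokenized sentences from your own data.
sentences = [
    ["python", "machine", "learning", "data"],
    ["sql", "database", "data", "analysis"],
    ["python", "data", "analysis", "pandas"],
]

model = Word2Vec(
    sentences,
    vector_size=100,  # embedding dimension
    window=2,         # context window size
    min_count=1,      # keep rare words in this tiny example
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=50,
)

print(model.wv.most_similar("data", topn=3))
```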
Well explained ! Thanks a lot
Great visuals, great voice, good pace of presentation. Everything is awesome in this video.
thanks for sharing :D
Thank you for the nice words Soheil! Glad it was helpful!
You are very fast for beginners to capture.
Brilliant video, as always, thanks so much. Would love to see your suggested follow-on using pre-trained word embeddings for sentiment analysis if you ever have time 🙂
If we have a sentence "vishy eat bread" and we vectorize the word "eaat" (a misspelled word), why does fastText see that the word "eaat" is more similar to the word "eat"? What does the architecture look like? Is it possible for fastText to classify words without using skip-gram? Thanks.
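fastText represents each word as the sum of its character n-gram vectors, so "eaat" and "eat" share n-grams such as "<ea" and "at>", which is why the misspelling still lands near "eat". It still trains with skip-gram or CBOW; the subword trick is what handles out-of-vocabulary words. A small gensim sketch on a toy corpus (results on data this tiny are noisy):

```python
from gensim.models import FastText

sentences = [["vishy", "eat", "bread"], ["people", "eat", "rice"], ["dogs", "eat", "meat"]]

# sg=1 selects skip-gram training; min_n/max_n control the character n-gram sizes
model = FastText(sentences, vector_size=50, window=2, min_count=1,
                 sg=1, min_n=2, max_n=4, epochs=100)

# "eaat" never appeared in training, but its n-grams overlap heavily with "eat"
print(model.wv.similarity("eat", "eaat"))
print(model.wv.similarity("bread", "eaat"))
```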
Thanks for the explanation. Please try to make a video about how ELMo works.
Very clear, thank you
Nice video on word embeddings, keep it up!
Thank you!
A video on training a sentiment analysis model, please!
Thanks that helped a lot.
Glad it helped
I would surely like to learn ELMo. I'm guessing that ChatGPT used the same approach; correct me if I'm wrong 🙇🏻♂️
I'm your fan already, please make an ELMo video....!!!
Thank you for the suggestion!
Clear explanation! 👍
Glad you think so!
Hi, thank you for the video. Great introduction and also a practical example. One request: drop the music or reduce its intensity. It was distracting.
Noted! Thank you for the feedback Praveen
Yes, great video, but the music is definitely too loud and distracting! It's really hard to concentrate on what you're saying.
Awesome video!!
Super helpful, but is there a version of this without the music?
Sorry about that! We got a lot of feedback on this. Let me see if we can upload a version without the music. :D
Great job 👍
Thank you so much, amazing explanation, and you're beautiful.
Thank you
Great explanation
Thank you!
I love that all your examples are Lord of the Rings quotes because I run the Digital Tolkien Project which applies computational text analysis techniques to the works of Tolkien :-)
That's amazing! Nice to meet you! Huge Tolkien fan here. :)
@@AssemblyAI you should join the Digital Tolkien Project!
@@AssemblyAI Please provide the notebook code.
thnx
Would like a video on ELMo, thanks.
It'd be great to see a video on ELMo.
Great video. Thanks for sharing it. It would be great if you did a task like training a sentiment analysis model with word embeddings and shared it with us.
Great video! As for your analogy, I would guess that changing cocktail to bar would indeed give you cocktail. The analogy of having dinner at a restaurant does not match having a cocktail at a bar.
Finally, somebody proved the king-queen thing :D
Great content, thanks. Due to a hearing problem, I would appreciate it if you could remove the background music. OK? Thanks.
Great! Thanks
You're welcome!
Your pretty face holds my concentration, and thus I understand anything taught by you, especially transformers, more than from any other YouTube video. Thank you so much for such videos... indebted!
What about BOW?
I like how you present the topic and your way of explanation.
But the background music is quite disturbing.
New crush added to life
Can you make a video about ELMo?
Noted!
Hi, can you please tell me your name? Looking forward to learning more from you.
I'd be interested in seeing a Python example of Word2Vec.