Would love a further video on ELMo. Thanks for all this!
There are maybe 30 videos on this topic and this is the only one that does not suddenly make a massive jump across whole concepts that the presenter knows but the watcher does not.
amazing video. Perfectly clear speech, good explanations, logical visualisations and the background music makes it a lot easier to focus. Thank you!!
Excellent explanation. I did some study on this topic before coming here and the reason was because so many terms and concepts were quite overwhelming. I generally understood those but still missed the fine tuned clarity. After watching this video, most of what I read before started making a lot of sense. I highly recommend this video. Thank you so much.
This is great to hear! You are very welcome!
Thank you. It is very clear and informative, though I really think you (AssemblyAI) should lose the background music; it is distracting and it gives the whole thing an infomercial feeling.
Somehow the music had a motivational influence for me. I caught myself vibing to it a few times
I would love the no-music option too.
Amazing content.. exactly what a learner wants.. all the concepts in a single video, explained in an easy-to-understand way in minimum time..
Excellent! Thank you so much for making an absolutely clear explanation.
great explanation. please explain ELMo and other approaches. also please make a video about efficient ways of clustering the embeddings 👍
Thank you Sajjad for the suggestion!
Thanks for taking the time to break this down and share!
You are very welcome! - Mısra
Very good video. I second the other comments. PLEASE drop the music completely. It would increase the quality of the experience by at least 70%. I had a hard time finishing the video because of the music.
Thanks, will do!
Wow such a good presenter. I really like the examples super clear. This stuff is amazing
Awesome overview.. Loved it.. Waiting for videos explaining GloVe and ELMo..
Great to hear you liked it!
Great explanation! Thank you! Pls. drop the music for next videos.
Thank you so much, there is no other video that explains this so well.
Thanks for the explanation of word embeddings. Nicely done!
great explanation. Please explain ELMO and GloVe. it was really great
Thank you for the suggestions!
@@AssemblyAI I'd love to see those videos too
Thanks for the video! I've enjoyed watching and liked the format and pace. I'd add the retrowave background to my playlist if I knew its name. I guess people would notice it less if the volume was lower.
Thank you for great explanation! Would love to see pre-trained word embeddings for sentiment analysis.
simple and clear explanation. please explain ELMo, thanks
Very nice explanation of embedding concept, Would love to see pre-trained word embeddings for sentiment analysis.
The absolute best video I've seen on this topic!!
Great explanation! I went through these topics for hours and hours, but this channel saved my time. And it's on target.
Great to hear!
Very informative and clear explanation
Thanks dear. Nicely paced intro. Good for recap.
Glad you liked it
8:15 I am having a problem with the statement "number of neurons in hidden layer = size of embedding".
I am confused: what is the size of the embedding?
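To illustrate the statement at 8:15: the embedding size is the length of the vector each word gets mapped to, and that length equals the hidden-layer width. A minimal sketch, assuming gensim is available (the toy corpus below is made up):

```python
# Minimal sketch (assuming gensim): vector_size sets both the number of
# hidden-layer neurons and the length of each word's embedding vector.
from gensim.models import Word2Vec

toy_corpus = [["one", "ring", "to", "rule", "them", "all"],
              ["the", "ring", "must", "be", "destroyed"]]

model = Word2Vec(toy_corpus, vector_size=100, window=2, min_count=1)
print(model.wv["ring"].shape)  # (100,) - one component per hidden neuron
```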
Excellent presentation. I will be teaching this topic to students shortly and will recommend this material.
Great to hear, thank you!
This was awesome. Would love to see the ELMo video and the sentiment analysis video you mentioned possibly making!
Thank you. I want to ask: are there any techniques that use Hidden Markov Models to represent the embeddings?
Thank u, very clear. I need to know how to use word embeddings for text classification.
Why are you so good? Where were you all this while? God bless you
Thanks! Great information in a very objective way!
Great explanation in less amount of time. Really liked the video.
That's great to hear!
Great visuals, great voice, good pace of presentation. Everything is awesome in this video.
thanks for sharing :D
Thank you for the nice words Soheil! Glad it was helpful!
Simply the best 😊
Excellent explanation. I have one question, please: how could I fit my model with these embedding vectors? For example, in one of my projects for extracting information from files, instead of using texts for training my models I thought of using embeddings, but I don't know the best way to represent them to my model. I hope you understand my question, and thank you.
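One common way to feed word embeddings to a downstream model, offered as a hedged sketch rather than a definitive answer: average the vectors of each document's tokens into a single fixed-size feature vector. The tokenized "file extraction" corpus below is hypothetical:

```python
# Sketch of one common representation: average a document's word vectors
# into one fixed-size feature vector for a downstream model.
import numpy as np
from gensim.models import Word2Vec

# Hypothetical tokenized documents standing in for text extracted from files.
docs = [["invoice", "total", "amount", "due"],
        ["shipping", "address", "order", "number"]]

w2v = Word2Vec(docs, vector_size=50, min_count=1)

def doc_vector(tokens, wv):
    """Average the vectors of the tokens the vocabulary knows."""
    known = [wv[t] for t in tokens if t in wv]
    return np.mean(known, axis=0) if known else np.zeros(wv.vector_size)

X = np.stack([doc_vector(d, w2v.wv) for d in docs])
print(X.shape)  # (2, 50) - one feature row per document, ready for a classifier
```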
Yes - to all videos you suggest making! Great guide, thank you.. I was struggling to see value in lemmatization and was concerned about a loss of coherence. Seeing several worked examples is great. Interested in how the final results were all different but all had a similarly high percentage match. How do you tackle this?
Brilliant video, as always, thanks so much. Would love to see your suggested follow on using pre-trained word embeddings for sentiment analysis if you ever have time 🙂
Thank you. Very good and complete explanation.
Great work !!
Can you make a video on ELMo and Transformer-based word embeddings???
Great suggestion!
Interested in “Creating your own embedding before doing binary or multi label classification prediction”! Thanks for the clarity.
top video for embedding introduction
It's a really good explanation, thank you very much :)
You are welcome!
This is amazing. Can you share the Python notebook you show at 12m33s?
Very interested in an in-depth explanation of ELMo
would love to see a video on building an ELMo embedding model. Thanks for this one
Very well explained!! Thank you so much
You're welcome!
Would be great to see a video on ELMo!
Thank you for the suggestion, noted!
Thanks for the video. I do have a question: when you said that, for instance, CBOW has only one layer, the output of this layer should be a vector the size of the embedding dimension. But in order to train the model we need to compare this output with the middle word, which is actually a one-hot encoded vector the size of the vocabulary, so there might be another layer and a softmax.
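The commenter's reasoning is right: during training, CBOW needs a second, vocabulary-sized output layer with a softmax to predict the middle word; the hidden projection is what becomes the embedding. A minimal PyTorch sketch of that shape (illustrative only, not the video's code):

```python
# Sketch of CBOW's training shape: a projection (embedding) layer of the
# embedding dimension, then an output layer of vocabulary size whose
# softmax scores are compared against the one-hot middle word.
import torch
import torch.nn as nn

class CBOW(nn.Module):
    def __init__(self, vocab_size, embed_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # the embeddings
        self.out = nn.Linear(embed_dim, vocab_size)       # training-only head

    def forward(self, context_ids):
        hidden = self.embed(context_ids).mean(dim=1)  # average context words
        return self.out(hidden)  # logits; softmax lives inside the loss below

model = CBOW(vocab_size=10_000, embed_dim=100)
context = torch.randint(0, 10_000, (1, 4))  # 4 context word ids
target = torch.tensor([42])                 # id of the middle word
loss = nn.CrossEntropyLoss()(model(context), target)
```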
Great videos there, thank you for your content and keep up the good work!
Great and very illustrative video
Is there a sentiment training model video that builds from this? Trying to build a recommendation system based on candidate sentences and a job description
We don't have that video yet but thank you for the suggestion!
How large should data for a custom embedding be and is it possible to utilize a GPU for the creation of a word embedding vector space?
Great explanation! Thanks for sharing
From the embeddings of your name, I removed those of "work", added "great" and "relationship", and I came up with the embeddings of my own name. How come? Mere coincidence? 🤔🤔
Great video, btw!
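For anyone curious how that arithmetic is usually run (personal names are rarely in pretrained vocabularies, so the classic example is used here): gensim's most_similar takes positive and negative word lists. A sketch assuming the pretrained Google News vectors, a large (~1.6 GB) download:

```python
# Sketch of embedding arithmetic with gensim's pretrained vectors
# (downloads ~1.6 GB on first use).
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")

# king - man + woman ≈ queen
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# -> [('queen', ...)]
```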
Would it be possible to use word embedding to ask if a text is about a certain topic (or rather to what degree a text is about a topic)?
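In principle yes, and the score comes out as a degree rather than a yes/no. A hedged sketch of one simple approach: cosine similarity between the averaged text vector and the topic word's vector (assumes the small pretrained GloVe vectors from gensim's downloader):

```python
# Sketch: score how much a text is "about" a topic word via cosine
# similarity between the averaged text vector and the topic vector.
import numpy as np
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")  # small pretrained vectors

def topic_score(tokens, topic):
    text_vec = np.mean([wv[t] for t in tokens if t in wv], axis=0)
    topic_vec = wv[topic]
    return float(np.dot(text_vec, topic_vec) /
                 (np.linalg.norm(text_vec) * np.linalg.norm(topic_vec)))

print(topic_score(["the", "striker", "scored", "a", "late", "goal"], "football"))
```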
Clear explanation! 👍
Glad you think so!
Thanks for the explanation, please try to make a video about how ELMo works
Do you have a link to the Python notebook you go over at the end?
amazing video!!!❣❣❣ Thanks for sharing
I'm your fan already, please make an ELMo video....!!!
Thank you for the suggestion!
How do I know which embedding will be best choice for a specific use case? How do I know which distance measure will be best?
It depends on your use case. If it mostly involves general words like tea, king, actor, etc., you can try different pretrained embeddings and see for yourself which ones work well on particular examples from your use case. If your use case is quite specific, something like representing skills as vectors, you may need to train your own word2vec model on your data, since pretrained embeddings may not cover what you need.
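A minimal sketch of the reply's second option, training your own Word2Vec on domain data; the "skills" corpus here is hypothetical and far too small for real use:

```python
# Sketch: train domain-specific embeddings when pretrained vocabularies
# don't cover your terms. The toy "skills" sentences are made up.
from gensim.models import Word2Vec

skill_sentences = [
    ["python", "pandas", "numpy", "machine", "learning"],
    ["react", "typescript", "css", "frontend"],
    ["python", "django", "postgresql", "backend"],
]

model = Word2Vec(skill_sentences, vector_size=32, window=3, min_count=1)
print(model.wv.most_similar("python", topn=3))  # noisy on a corpus this tiny
```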
Can the embeddings from Transformer be used elsewhere, like with Word2Vec?
Well explained ! Thanks a lot
Thank youuuu, it's my first video but I guess I should make your videos my priority. I'm into NLP, thanks a lot ❤
If we have the sentence "vishy eat bread" and we then vectorize the word "eaat" (a misspelled word), why does fastText see that "eaat" is more similar to "eat"? How does the architecture work? And is it possible for fastText to classify words without using skip-gram? Thanks
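fastText builds a word's vector from its character n-grams (this is independent of the skip-gram vs. CBOW training choice), so the out-of-vocabulary "eaat" still gets a vector, built from subwords it shares with "eat". A plain-Python sketch of the n-gram extraction; the real library hashes these into buckets:

```python
# Why fastText puts "eaat" near "eat": both words are represented by
# character n-grams, and the two share several. This sketch only shows
# the extraction step, not fastText's internal hashing.
def char_ngrams(word, n_min=3, n_max=6):
    padded = f"<{word}>"  # fastText wraps words in boundary markers
    return {padded[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)}

a, b = char_ngrams("eaat"), char_ngrams("eat")
print(a & b)                    # shared subwords such as '<ea' and 'at>'
print(len(a & b) / len(a | b))  # rough overlap driving the similarity
```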
Very clear, thank you
Great tutorial. She speaks like a native speaker. She looks like a Turkish girl, beautiful one :)
I love that all your examples are Lord of the Rings quotes because I run the Digital Tolkien Project which applies computational text analysis techniques to the works of Tolkien :-)
That's amazing! Nice to meet you! Huge Tolkien fan here. :)
@@AssemblyAI you should join the Digital Tolkien Project!
@@AssemblyAI Pls provide the notebook code ..
thnx
Thanks that helped a lot.
Glad it helped
Great explanation
Thank you!
Awesome video!!
Great job 👍
Hi.. thank you for the video.. great introduction and also a practical example.. One request is to drop or reduce the intensity of the music. It was distracting.
Noted! Thank you for the feedback Praveen
Yes, great video but music is definitely too loud and distracting! It's really hard to concentrate on what you're saying.
Great video. Thanks for sharing it. It would be great if you did a task like training a sentiment analysis model with word embeddings and shared it with us.
nice video on word embeddings, keep it up!
Thank you!
is there a video about sentiment analysis yet?
Much appreciation for the awesome crystal-clear explanation. However (just a little detail), could you lower the background music? It's kinda distracting for me. Actually, I feel like I'm waiting for customer service on the phone. If anyone feels the same way, please let me know, guys!
Your pretty face holds my concentration, and thus I understand anything taught by you, especially transformers, more than from any other YouTube video.. Thank you so much for such videos... indebted!
thank you so much, amazing explanation, and you're beautiful
Thank you
Great video! As for your analogy, I would guess that changing restaurant to bar would indeed give you cocktail. Having dinner at a restaurant matches having a cocktail at a bar, not a bar at a cocktail.
Great! Thanks
You're welcome!
you are very fast to capture for beginners
I would surely like to learn ELMo. I'm guessing that ChatGPT used the same; correct me if I'm wrong 🙇🏻♂️
Video on Training a sentiment analysis model please
finally, somebody proved the king-queen thing :D
super helpful, but is there a version of this without the music?
Sorry about that! We got a lot of feedback on this. Let me see if we can upload a version without the music. :D
Would be great to see a video on ELMo.
Great!
Thank you!
Why is there a background soundtrack during the lecture? Does it help with learning or focus? I find it kinda distracting and feel rushed.
Do transformers from scratch. I heard they can be written in 50 lines. I would like to understand how BERT encodes words.
what about BOW?
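For contrast with the learned embeddings in the video, a quick bag-of-words sketch (assuming scikit-learn): BOW gives sparse word counts with no notion that two different words can be similar.

```python
# Bag-of-words for contrast: sparse raw counts, no learned similarity
# between words (unlike word embeddings). Assumes scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["one ring to rule them all",
        "one does not simply walk into mordor"]

bow = CountVectorizer().fit_transform(docs)
print(bow.toarray())  # each column is a word count, not a learned dimension
```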
Would like a video on ELMo, thanks
I like how you present the topic and your way of explanation.
But the background music is quite disturbing
I'd be interested in seeing a Python example of Word2Vec.
Great content, thanks. Due to a hearing problem I would appreciate it if you could remove the background music. Ok? Thanks
Noice !
Hi, can you please tell me your name? Looking forward to learning more from you.
New crush added to life
nice and crisp, just one suggestion: "please remove the background music", it detracts from the viewer's experience :)
Thank you! And noted!