I’ve watched a lot of videos on YouTube, so many with animations etc. I nearly lost hope, thinking I would never be able to grasp this concept. This is the only one that truly explains what a word embedding is and how it’s derived, in a simple manner. Thank you so much
thank you for saying that bit about word2vec being outdated. a coworker was lobbying to use it for one of our projects and this helped nip that in the bud.
Thanks for these videos and your blog, I've learned so much from you. I always read your blog entries before diving into the original paper.
Personality scores are a great example!
One unsolicited piece of advice. You have profound knowledge of AI. You should share this knowledge by making more videos on various AI topics. I hope every AI aspirant gets a chance to watch your videos.
Keep it up..:)
Great job! I enjoy very much your channel and blog! THK!
This guy is the best. He is a good guy.
You are great, please never stop
Thank you. Not related but I really want to know, what font are you using in your blog poster?
Jay, how does training LLMs differ from training text embedding models? Or is an embedding model a byproduct of training an LLM? Like in transformers, where text is converted to embeddings first before being fed to the transformer blocks. Thanks!
Thank you so much, it's a great explanation, clear and easy to understand
Excellent
Very good explanation! One more thing: is word2vec using dimensionality reduction too? We can choose 50, 100, or 200 dimensions, but how does that work? Thanks
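For anyone with the same question: word2vec's dimensionality isn't a reduction step. You pick the embedding size up front as a hyperparameter, each word gets a vector of exactly that length, and training adjusts the values directly. A minimal illustrative sketch in plain Python (the vocabulary and names here are made up, and the random initialization stands in for what training would later refine):

```python
import random

vocab = ["king", "queen", "man", "woman"]
dim = 50  # chosen up front (50, 100, 200, ...), not derived by reducing anything

random.seed(0)
# Each word starts with a random vector of the chosen length;
# training (skip-gram / CBOW) then nudges these values so that
# words appearing in similar contexts end up with similar vectors.
embeddings = {w: [random.gauss(0, 1) for _ in range(dim)] for w in vocab}
```

So choosing 50 vs. 200 just changes the width of the table being learned; nothing larger is being compressed.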
the goat
Finally found the video. If you haven't watched it yet, this is the one.
3:32 "...Jay is 38 on the 0 to 100 scale... so -.4 on the -1 to 1 scale...": How is that? I get -.24. If it's -.4 on the -1 to 1 scale, that's 30 on the 0 to 100 scale. Please fix my math.
That’s what was bothering me too
I agree, I thought so as well.
I think your math is right, I got -0.24 too.
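For what it's worth, the replies' arithmetic checks out under a straight linear map from [0, 100] to [-1, 1]: 38 lands at -0.24, and it is 30 that lands at -0.4. A quick check (the helper name is just illustrative):

```python
def rescale(x, old_min=0.0, old_max=100.0, new_min=-1.0, new_max=1.0):
    # Linearly map x from [old_min, old_max] to [new_min, new_max].
    return new_min + (x - old_min) * (new_max - new_min) / (old_max - old_min)

assert abs(rescale(38) - (-0.24)) < 1e-9  # 38 on the 0-100 scale -> -0.24
assert abs(rescale(30) - (-0.4)) < 1e-9   # -0.4 corresponds to 30, not 38
```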
Yoo Flying Beast!!
Why does the person keep growing and shrinking throughout the video?
Instead of explaining, you just went scrolling through pages. It would have been better to keep it short and maybe make another video for the subsequent sections.