Top notch explanation with amazing animations!!
Appreciate it 🙏🏾
New Achievement Unlocked: Found another awesome channel to subscribe and watch grow 🌟🌟
Loving the motion graphics!
Fantastic breakdown
This really was a high quality video thank you
Keep doing what you're doing, my friend. I'll always be here supporting you.
What a great video, loved it ❤
To the point and simple. Thanks a lot.
Do you mind sharing the tools you used to make this beautiful piece of art? Looking to learn to make videos and share them with students.
🙏🏾. My tools are just Adobe Premiere, Hex, and Notion.
Cool vid! What is the tool you used for the word analogies and visualizations?
What the fug, did this awesome video just pop up in my algorithm?
😏
Subbing and commenting and liking to boost algorithm
🤝
But how does the loss function work if the model doesn't know what is correct? And we humans couldn't judge the loss factually.
I think I understood: the model compares against the probability of these words showing up together in other texts. Am I right? Thanks for this great video
The loss function learns from how words naturally appear together in text. It doesn't need an absolute "correct" answer - instead, it measures how well the model predicts actual word co-occurrences in the training data. If words like "cat" and "drinks" frequently appear near each other, the model learns to expect this pattern, and gets penalized when it predicts unrelated words.
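To make that concrete, here's a minimal sketch of a skip-gram-style loss with negative sampling, the word2vec flavor of this idea. The toy vocabulary, the random embeddings, and the `loss` function here are all assumptions for illustration, not the exact method from the video: observed pairs like ("cat", "drinks") are pushed toward high scores, while randomly sampled non-pairs are pushed toward low scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary and embedding size, just for illustration.
vocab = ["cat", "drinks", "milk", "quantum", "tractor"]
dim = 8

# Two embedding tables: one for center words, one for context words.
center_emb = rng.normal(size=(len(vocab), dim))
context_emb = rng.normal(size=(len(vocab), dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def loss(center, context, negatives):
    """Penalize low scores for an observed pair and high scores for sampled non-pairs."""
    c = center_emb[vocab.index(center)]
    pos = context_emb[vocab.index(context)]
    # Observed co-occurrence: push its score toward 1.
    total = -np.log(sigmoid(c @ pos))
    # Randomly sampled "unrelated" words: push their scores toward 0.
    for neg in negatives:
        n = context_emb[vocab.index(neg)]
        total += -np.log(sigmoid(-(c @ n)))
    return total

# Before training this is just noise; after training on real text,
# pairs that actually co-occur end up with lower loss than unrelated pairs.
print(loss("cat", "drinks", negatives=["quantum", "tractor"]))
```

So the "correct answer" is never labeled by a human; the training text itself supplies it, because the words that actually appeared next to each other are the positives and everything else can serve as a negative.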