Hey everyone, there was a slight issue with index_to_word in the starter notebook - it's all fixed now!
My apologies for not spotting it earlier :(
Very cool! And very happy I found this channel!
Hi Mariya... Thanks for the nice & clean explanation. 👍
Can you please make videos on optimizing such naive NNs & advanced AI? :)
Great video! Please do more machine learning videos!
Brilliant delivery, my dear, you are awesome. It's not easy to teach, and I really appreciate your effort. I have subscribed to get more awesome content from you. Nice work, bless you!
Thank you so much Ayenco!! 😄
I am still super excited about your comment on Facebook, and now this??? You totally made my day!!! 🙂🙂🙂
Oh dear, this is epic!
Thank you! :)
Very well done
Yeah! Part 2!! Adjective of nearest neighbor... sorry, kidding... GPU? You can do this? CUDA!! Hey, you should check out MATLAB & Simulink! They have a Neural Network Toolbox that is very helpful!! Great vid!! This can get you up and running with NNs... and processing with NVIDIA... good job!! You're very helpful!!
model.parameters() - yup! GPU & GDDR6... or HBM... wow! You have worked really hard on this video!! Congrats!
Another really helpful new video! 😍🙏
Thank you so much for your support Elle! :D
Do you have reinforcement learning videos?
Or NEAT?
Hi, how did this end up working out for you? Is it worth doing?
Great video. Please use a larger terminal font size; I can just barely read it.
Does embedding_dim mean input_features?
Not at all; embedding_dim has to do with the architecture of your neural network. You are basically transforming a one-dimensional vector into an n-dimensional vector by setting embedding_dim = n.
The input features you're looking for are labeled as "features" in the video.
You can see me targeting them inside the train function @ 04:46:
for feature, target in train_data:
...
So the first value of the train_data tuple would hold the input_features, while the second value holds the target.
Let me know if I explained it properly, it's very difficult to find simple words to describe neural network terms :)
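If a concrete example helps, here's a minimal sketch (with made-up numbers, not the notebook's exact values) showing how embedding_dim reshapes each word index while the features themselves stay the same:

import torch
import torch.nn as nn

vocab_size = 100    # hypothetical vocabulary size
embedding_dim = 16  # n: each word index becomes a 16-dimensional vector
embed = nn.Embedding(vocab_size, embedding_dim)

feature = torch.tensor([3, 41, 7, 19, 88])  # 5 word indices - the "features"
vectors = embed(feature)
print(vectors.shape)  # torch.Size([5, 16]) - 5 words, 16 dimensions each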
Great videos! Would it be reasonable to combine this with one of your web-scraping projects to expand the information on each word to include its type (e.g., verb, adjective, etc.) and its tense? I'm thinking that you could scrape dictionary.com for all that information, then build a CNN that makes more sense :)
I don't understand how you are calculating loss... As I understand your code, you assign each word a numerical value, not a one-hot encoding. You input 5 words and your code outputs a word (let's call it y_predicted). You calculate the loss by comparing y_predicted with y_actual. But there is no real relationship between the values of words in the word_to_index list. Words with nearby indices are no more similar than words far apart. What am I missing?
Word embeddings are a different approach from one-hot encoding. If you'd like to check out the official PyTorch n-gram modeling tutorial, you can find it here:
pytorch.org/tutorials/beginner/nlp/word_embeddings_tutorial.html
We are basically going through all the words in the database and teaching the network that "if you receive an input of words 1, 2, 3, 4, 5 - the output should be 6". With this approach our network learns the entire database, but if the predict function is given words that didn't appear in the database, it won't be able to perform the prediction. It has to see the words in advance and learn them to be able to do so.
I'll be filming an "introduction to neural networks" video shortly. Stay tuned, as it will explain the concepts and give you a nice background before you dive into the code 😀
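As for the loss itself, here's a hedged sketch of the idea (made-up sizes, with CrossEntropyLoss standing in for whatever the notebook actually uses): the target is a class index over the whole vocabulary, the network outputs one score per word, and the loss compares those scores to the correct class - the numeric distance between word indices never enters the calculation.

import torch
import torch.nn as nn

vocab_size, embedding_dim, context_size = 100, 16, 5
model = nn.Sequential(
    nn.Embedding(vocab_size, embedding_dim),              # indices -> vectors
    nn.Flatten(start_dim=0),                              # [5, 16] -> [80]
    nn.Linear(context_size * embedding_dim, vocab_size),  # one logit per word
)
loss_fn = nn.CrossEntropyLoss()

feature = torch.tensor([3, 41, 7, 19, 88])  # words 1..5 (hypothetical indices)
target = torch.tensor([55])                 # word 6, as a class index
logits = model(feature).unsqueeze(0)        # shape [1, vocab_size]
loss = loss_fn(logits, target)              # 55 is a label, not a magnitude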
Marvellous.
Subscribed and liked 👍
Super! Thank you, Nikolai! 😁😁😁
Is anybody else getting "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!"?
Are you getting this when loading the model? Make sure you turn on the GPU runtime before you try to load it and give it another go 😀
If it's not the case - please let me know what line of code raised the error and I'll try to help 😉
@@PythonSimplified Wow, thank you so much for the response, I really appreciate it! The error happens after running the train function, in the cell right after the one where we defined it. I have changed the runtime type to GPU as well... and the cell even outputs that the GPU is available. One thing I forgot to mention: the error ends with "(when checking argument for argument target in method wrapper_nll_loss_forward)". Maybe that will help find the culprit?
Do you have the completed code available anywhere? If so, I could check whether it's just my code or something else.
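Not the completed notebook code, but a guess at the likely culprit: the error names the target argument of nll_loss, which usually means the target tensor was left on the CPU while the model and features were moved to the GPU. A minimal sketch of the fix, with dummy stand-ins for the notebook's model and data:

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(5, 10).to(device)               # stand-in for the real model
train_data = [(torch.randn(5), torch.tensor(3))]  # stand-in (feature, target)
loss_fn = nn.NLLLoss()

for feature, target in train_data:
    feature = feature.to(device)  # input on the same device as the model
    target = target.to(device)    # the target too - this is the tensor the
                                  # nll_loss error complains about
    logits = model(feature)
    loss = loss_fn(torch.log_softmax(logits, dim=-1).unsqueeze(0),
                   target.unsqueeze(0))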
You could make a new video about predicting the next word in a text message.
I recommend watching the tutorials at 0.75x speed.
Hi Lizard, was I too fast or was my accent too heavy?? 😅
@@PythonSimplified I just want to understand what it all means in AI.
Mariya, try uploading a video with a Russian voiceover!
Zakhar, my Russian accent is very funny, and I don't know many important words. I just haven't lived in Crimea since I was 6, so my Russian vocabulary is like a 6-year-old girl's 🤣🤣🤣
Maybe I'll do subtitles instead of a voiceover, would that help?
Check out the Murrengan channel, he also teaches very well, and everything is in Russian:
th-cam.com/channels/8D-Zw9iR6pRyGOXHVqzlQw.html
Mariya, a funny accent is even better ))) Give it a try, and if you need help with Russian - feel free to ask )))
Mariya, I would also advise you to blur out your personal data when editing. Anything can happen. Rewatch this video th-cam.com/video/P4F3PzCMrtk/w-d-xo.html and be more careful from now on!
Single?
Is it normal to not understand?
Machine Learning FOR BEGINNERS 1, 2, 3, of course, but PyTorch 4 and 5 are too much for a beginner.