Thank you, loved the explanation. You covered quite a lot in very little time, and very clearly too.
Glad you liked it
Thanks! At 13:14, I have a question: why do you use precision, recall, etc. (classification metrics)? And how does the model calculate them, since its output is not a discrete value? I am a newbie.
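For context, classification metrics on continuous scores are typically computed after thresholding the predictions. A minimal sketch with scikit-learn, where the arrays and the 0.5 cutoff are made up for illustration and not taken from the notebook:

import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([1, 0, 1, 1, 0])              # binary ground-truth interactions
y_score = np.array([0.9, 0.2, 0.4, 0.8, 0.6])   # continuous model predictions
y_pred = (y_score >= 0.5).astype(int)           # binarize at an assumed 0.5 threshold
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))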
Thanks. When you test, you are using data from training. I am referring to this line: long_test = wide_to_long( ). The parameter should be data['test']. Please correct me if I am wrong.
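For reference, a sketch of the change being suggested; the exact signature of wide_to_long and the unique_items argument are assumptions here, not the notebook's actual code:

# Build the long-format evaluation table from the held-out split rather than
# from the training data (argument names are assumptions).
long_test = wide_to_long(data["test"], unique_items)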
Thanks. I see a problem with calling make_tf_dataset() just once for training: this function returns tensors with a batch size of 512, and you use this data just once for training. I think you need to put this into a loop, or make the batch size bigger. Am I missing something in my understanding?
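For context, a batched tf.data.Dataset passed to model.fit is re-iterated on every epoch, so all batches get consumed each epoch rather than a single batch of 512. A minimal sketch with placeholder data (make_tf_dataset itself is not reproduced here):

import tensorflow as tf

# Placeholder features/labels standing in for the notebook's user/item pairs.
features = tf.random.uniform((5_000, 2), maxval=100, dtype=tf.int32)
labels = tf.cast(tf.random.uniform((5_000, 1)) > 0.5, tf.float32)
ds = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(5_000).batch(512)

# model.fit(ds, epochs=10)  # each epoch loops over all ~10 batches, not just one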
My second question is: if interactions were not encoded as binary, but as the actual ratings (explicit feedback rather than implicit feedback), would your provided code still produce meaningful ncf_predictions?
I believe it should (it's been a while). The only thing you want to modify is to normalize the actual ratings between 0 and 1.
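A minimal sketch of that normalization with pandas; the column name 'rating' and the 1-5 scale are assumptions about the data, not the notebook's actual columns:

import pandas as pd

df = pd.DataFrame({"user_id": [1, 1, 2], "item_id": [10, 11, 10], "rating": [5, 3, 1]})
# Min-max scale explicit ratings into [0, 1] so they match the model's sigmoid output.
df["interaction"] = (df["rating"] - df["rating"].min()) / (df["rating"].max() - df["rating"].min())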
I did not really understand what these ncf_predictions mean. Does a higher ncf_prediction value for a specific (user_id, item_id) pair mean the item should be recommended to the user? Then, during the recommendation phase, should I recommend to each user the item_ids with the highest NCF values?
Yes, the items with the highest predicted values that the user has not already seen/bought should be recommended. The ncf_predictions are basically the model's "guess" of whether you'd buy/watch the item on your own (and we approximate "watched" = "liked").
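A minimal sketch of that recommendation step; the score matrix, the already-seen pairs, and k are all made-up illustrations:

import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 6, 3
ncf_predictions = rng.random((n_users, n_items))   # model score per (user, item) pair
seen = {(0, 1), (0, 3), (2, 5)}                     # pairs the user already watched/bought

for user in range(n_users):
    scores = ncf_predictions[user].copy()
    for u, i in seen:                               # mask items the user has already seen
        if u == user:
            scores[i] = -np.inf
    top_k = np.argsort(scores)[::-1][:k]            # recommend the k highest remaining scores
    print(f"user {user}: recommend items {top_k.tolist()}")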
@@murilo-cunha Thank you for the answers. Do you also have any recommendations for reducing the training time of the NCF model? I currently have 138k users and 1,470 items, and it takes days to finish the training process.
@@efesencan8079 Hmm, nothing in particular comes to mind. You can always reduce the model size (layers, embedding size, etc.), scale your training up (get a more powerful machine, GPUs, etc.), or scale out (distributed training with SparkML or something similar). It's a bit hard to say without more specific info. Hope this helps!
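As an illustration of the "reduce the model size" option, a hypothetical Keras NCF-style setup where a small embedding dimension and a single narrow MLP layer keep the parameter count down; none of these sizes come from the video:

import tensorflow as tf

n_users, n_items = 138_000, 1_470   # counts from the comment above
embedding_dim = 8                    # small embedding size to shrink the model

user_in = tf.keras.Input(shape=(1,), name="user_id")
item_in = tf.keras.Input(shape=(1,), name="item_id")
user_vec = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(n_users, embedding_dim)(user_in))
item_vec = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(n_items, embedding_dim)(item_in))
x = tf.keras.layers.Concatenate()([user_vec, item_vec])
x = tf.keras.layers.Dense(16, activation="relu")(x)   # one narrow MLP layer
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model([user_in, item_in], out)
model.summary()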
Hi @Efe Sencan, can you give me the link to your dataset, please? I am having trouble finding one; I am also working on social media users.
Thanks. I have a question on the user ID information: is it possible to provide user-related information as input to the model?
Yes, you can. But then you are moving towards a more hybrid approach (as opposed to the collaborative filtering approach in the video).
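A minimal sketch of what that hybrid direction could look like in Keras: a vector of user-side features is concatenated with the collaborative embeddings before the MLP. The feature size and all layer sizes are made up:

import tensorflow as tf

user_in = tf.keras.Input(shape=(1,), name="user_id")
item_in = tf.keras.Input(shape=(1,), name="item_id")
user_feats_in = tf.keras.Input(shape=(10,), name="user_features")  # e.g. age, country one-hots

user_vec = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(1_000, 16)(user_in))
item_vec = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(500, 16)(item_in))
x = tf.keras.layers.Concatenate()([user_vec, item_vec, user_feats_in])  # hybrid: IDs + features
x = tf.keras.layers.Dense(32, activation="relu")(x)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model([user_in, item_in, user_feats_in], out)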
Hello, thanks for this video. If you can, please send me the code.
There are some links in the description. For Google Colab:
colab.research.google.com/github/murilo-cunha/inteligencia-superficial/blob/master/_notebooks/2020-09-11-neural_collaborative_filter.ipynb
@@murilo-cunha thanks a lot
Thanks. When you run a test, the results do not look good: for rows with 'interaction' equal to 1, the prediction should be close to 1, but this is not the case.
Thanks, but the problem with these kinds of videos is that you are talking to an expert who already knows all these things; someone who does not know them will not understand anything! I hope future videos are more detailed and explain each step slowly instead of only reading slides!
No one can hold your hand through everything; you need to do some research on your own to get a feel for the context of this domain. I'd suggest you do that first and then come back to re-watch the video.
@@jagicyooo2007 Thanks for the reply. I learned on my own and have already built a recommender system, and I realized these kinds of videos are a waste of time! People should learn how to implement it, not just watch short videos and highlights.