And then there is Amazon asking me to buy a second washing machine.
Some products really need a "people usually buy only one at a time" tag. Refrigerators, cars, houses...
Wait a minute... hey, that's basically Amazon telling you, "Hey, your washing machine is about to go out of service - wanna buy a new one just in case?"
Amazon uses "customers who bought this also liked" and similar simple algorithms. Nothing complicated or sophisticated, and they work just fine.
People are terrified at the thought of machines taking over, but in practice the algorithms used in AI and recommendation systems are just as inaccurate as a friend's recommendation.
🤣🤣🤣🤣
I think it should be noted that for the cold start problem, you'd want to use content filtering to define which users to show those new items to - hence, a combination of content and collaborative filtering is the best approach.
a hybrid approach
These are some solid-gold videos you are putting up on your channel for free! Your incredible knowledge, such hard work, and the will to put such amazing educational concepts before the audience really create these masterpieces! Absolutely love it! 💗
appreciate this feedback thank you
Art of the Problem is one of the better things on the internet.
Dude, I've literally watched a zillion videos on YT, and nothing comes close to this one. The SVD simplification is on another level!
woo! glad you found it
🤣🤣🤣
I don't understand why this channel isn't more popular. From the beginning it's been great.
thanks for sticking around, have you checked out the new series?
Thanks a lot. It is so simple that I can understand immediately.
glad it helped
Love the video, thank you, great explanation. I wonder if I'm the only one who finds the music a bit... creepy or disturbing... or maybe that's intended.
Rewatching, I see that may be my fault for watching at 2x speed.
I feel the same, it's rather distracting
it's really disturbing at any speed... I couldn't keep watching, so I was looking for this comment :/
My god, the music was genuinely traumatic. The explanation was great, but I had to turn off the audio. What was the creator thinking? Since when is horror music a good idea for background music?
awful background music
I love all the artistic choices you guys make when putting these videos together, they have a spacious mood to them. It’s a little sad to read other viewers don’t like the music choice as much, each to their own I guess.
I get that a lot, it's nice to hear from both sides that the mood 'works'
@@ArtOfTheProblem I found it distracting - I think it is simply too high in the mix to be "background music". I paused the video several times because I was doing something else in the meantime and thought another video had started or something.
I am a mathematics PhD student writing my thesis on low-rank matrix completion, so it was great seeing this video show up in my feed! One of my biggest concerns has been why we can assume that real-life data forms part of a low-rank matrix. Even though assuming data is non-random and lies in a low-dimensional space is very reasonable, the issue is that the set of low-rank matrices is a very specific low-dimensional space, so why should we assume our data lies on this particular one? The features argument seems to me a fair reason why the low-rank assumption may be reasonable.
It's a great question. I'm currently working on a video on manifold hypothesis that gets at this question a little deeper. Would love to hear other's thoughts
That's very interesting; I like the field of prediction/compression/NMF a lot. Do you have some references or papers on the subject you mentioned? How do you define real-life data?
@@lucacaccistani9636 Here is a paper on matrix completion that describes the alternating projection method, and some theoretical results using algebraic geometry: arxiv.org/abs/1711.02151
By real-life data I mean data that comes from real life, such as an image or the incomplete user ratings in the Netflix problem. Given the unknown positions of a matrix, it's easy to construct a partially observed matrix that can be completed to a rank-r matrix: just generate a rank-r matrix and delete the entries at the unknown indices; the resulting incomplete matrix then has a rank-r completion by construction. If instead we choose the known entries randomly from a continuous distribution, then often a rank-r completion exists with probability 0, or there are infinitely many rank-r completions. However, since we assume our data lies on some low-dimensional space, choosing random known entries may not be a good model for real data.
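For anyone who wants to play with that construction, here's a rough NumPy sketch (the sizes and mask fraction are just my choices): build a rank-r matrix from two thin factors, then hide some entries; by construction the observed entries admit a rank-r completion, namely the original matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 6, 5, 2

# A product of an (m x r) and an (r x n) factor has rank at most r.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Hide roughly 40% of the entries to get a partially observed matrix.
mask = rng.random((m, n)) < 0.6          # True = observed entry
partial = np.where(mask, A, np.nan)

# The hidden entries of A itself fill in `partial` as a rank-r completion.
print(np.linalg.matrix_rank(A))          # 2
```

Choosing the *observed* entries at random is fine; it's choosing their *values* at random that kills the low-rank structure, as the comment explains.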
In reverse, doesn't the utility of the approximation (people do seem to like the recommendations) provide some clue that there is a lower-dimensional manifold useful for the purpose of estimating *specifically* the preferences of people regarding movies? Also, if true randomness provides maximum information, and for the most part people's movie preferences, and movies themselves, are far from random, doesn't that also imply that there will be a useful, lower-dimensional manifold? All while keeping in mind that the movies people make and the movies that people watch are reflections of each other: people make movies that other people want to watch, and people only watch what movies people make.
this is what I assume (low-dimensional manifold)@@dmc-au
Interesting video. I downloaded my Netflix data once; it is amazing how much data they actually collect. One of the things they log is how long you watch each video (whether the actual movie or the preview clip on the selection screen) - i.e., if you watch the whole thing, you are somewhat interested in it and in "that type of movie".
It also logs what suggestions it gave you and why each suggestion was given (e.g. due to another video).
It also collects search terms (full/partial) and the results shown to you - i.e., you type "term" and up come "Terminator 1, 2, 3" and "The Terminal" (a totally different type of movie).
how did you download your netflix data?
The name I learnt this under at uni was Singular Value Decomposition. Same thing, different names. Great video as usual!
My goodness this is such a great video. Just now diving into your channel and loving what you're publishing. Thank you! Just subscribed.
@@patricksweet4104 thanks! Stay tuned
FYI consider supporting future content via www.patreon.com/artoftheproblem - thanks again
One of the coolest algorithms, and it's taking 1/3 of my day in scroll time.
just posted new video on RL th-cam.com/video/Dov68JsIC4g/w-d-xo.html
The real problem here is that a traditional recommendation algorithm recommends things you already have. We need a new algorithm that can analyze historical data and tell you what you may need in the future.
Very interesting!! I would love to see more videos on this topic. I would guess that the number of features can be increased to get a more accurate result, at the expense of greater computing power and storage requirements.
yes, exactly (same as making a neural network wider)
Not necessarily more accurate though, due to a phenomenon called overfitting: en.wikipedia.org/wiki/Overfitting?wprov=sfla1
I find it funny you used The Matrix as the main movie while also explaining matrices.
I AM SO HAPPY i discovered this channel!!!
welcome!
Excellent presentation and visualisation. I recommend this video for Google best award.
Easily and concisely explained. Appreciated
I’m attempting to make a video game recommendation system from a Steam games dataset and your video was super helpful to me!
cool please keep me posted
Really like how the explanation is concise and clear.
How can I like the video a million times? Now I can gladly go back to those papers with recondite information.
:))
the background music is weird
+++++++
+++
omg BGM is really annoying, felt like it is subconsciously programming me!
+++++++++
Nice video, but background music is a disaster
Damn. This is so concise and perfect.
It's worth noting that the patterns in data don't always mirror reality. People with asthma and COPD see a doctor earlier when they have trouble breathing, so an ANN would predict that people with asthma are at lower risk when they catch pneumonia, and schedule them accordingly.
These problems are frustratingly hard to find. Most approaches to making answers interpretable seem to be about learning a linear local approximation of the machine model, which works reasonably well on convolutional networks.
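For anyone curious, the "linear local approximation" idea can be sketched in a few lines: perturb the input near a point, query the black-box model, and fit a plain least-squares line to the responses. The stand-in model and function names below are mine, not from any specific interpretability library:

```python
import numpy as np

def black_box(x):
    # Stand-in for an opaque trained model we want to explain.
    return np.sin(x[0]) + x[1] ** 2

def local_linear_explanation(f, x0, n_samples=500, scale=0.1):
    """Fit y ~ w.x + b on samples near x0; w acts as local feature importances."""
    rng = np.random.default_rng(1)
    X = x0 + scale * rng.standard_normal((n_samples, len(x0)))
    y = np.array([f(x) for x in X])
    A = np.hstack([X, np.ones((n_samples, 1))])      # append bias column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]                                  # drop the bias term

w = local_linear_explanation(black_box, np.array([0.0, 1.0]))
# Near (0, 1) the true gradient is (cos 0, 2*1) = (1, 2); w should be close.
```

The catch the comment raises still applies: the fit explains what the model does locally, not whether the model's pattern (asthma = lower risk) reflects reality.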
thanks for this video, i have to build a recommender system for college and this was a really good concise description of how the thing works!
sweet glad this helped you
Hi, how did it go? I'm also on a journey to build one.
This is gold. Thank you so much for making this.
Very enjoyable and clear explanation! Great video
Brit, you posted a video but I didn't see a Patreon billing. Please take my money! You deserve it!
Thank you for this video! Explained a very complex concept for me in a very understandable way.
appreciate the feedback
Very nice video! I've been searching for a while for a correct explanation of these algorithms. Finally I've found it!
excellent welcome to the club!
Great video, love the simple explanations and easy to follow visuals. But why do I feel like I'm about to get jumpscared at any point
@@collinshen3808 horror fan :)
Great work. Very precise and comprehensive. Thank you.
Recommendation engines are a hot CS topic: business folks want them for personalization and user engagement in marketing, media, and e-commerce.
Really nice and insightful video.
Please don't stop making videos
Great explanation!
Thanks! stay tuned for more
some legend made this video!
glad this helped you
Very intuitive approach, thanks a lot !!!
Very interesting and clear explanation
Actually it's pretty cool, my thesis is in that area :)
Which ML algo is he talking about in 5:10 to 5:48?
"The things that are recommended to you are based on patterns the machine has observed in other people that are similar to yourself"
It would be interesting to take this to the next step of analysis.. what happens when the recommendations the machine gives start to have an actual tangible affect on the people being given the recommendations?
i would say this is certainly the case
holy cow this is a good video
glad this helped
The bg music feels like being in a horror movie lol
But the video is great
Can you explain: what if there are many, many ways to generate the current data? Does that mean we will have multiple reference tables? How do we fix this problem?
Notes for my future revision.
*CONTENT FILTERING*
Based on what someone likes, work out what else he/she might like.
*COLLABORATIVE FILTERING*
A user likes things that other users with similar habits also like.
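A toy sketch of the collaborative note above (the ratings matrix and helper function are invented): score an unseen movie for a user by weighting other users' ratings by how similar those users are.

```python
import numpy as np

# Rows = users, columns = movies; 0 = unrated, 1-5 = rating.
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4]], dtype=float)

def collaborative_score(R, user, movie):
    """Predict a rating: other users' ratings, weighted by cosine similarity."""
    sims = np.array([np.dot(R[user], R[u]) /
                     (np.linalg.norm(R[user]) * np.linalg.norm(R[u]) + 1e-9)
                     for u in range(len(R))])
    sims[user] = 0.0                       # exclude the user themself
    rated = R[:, movie] > 0                # only users who rated this movie
    return np.dot(sims[rated], R[rated, movie]) / (sims[rated].sum() + 1e-9)

# User 0 hasn't seen movie 2; the user most similar to them rated it low.
print(round(collaborative_score(R, 0, 2), 2))   # 1.73
```

Content filtering would instead compare movie 2's *features* against what user 0 already likes; this memory-based collaborative version needs no features at all, only the ratings table.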
Hi, where did you get the Netflix images about content filtering at 4:16 in the video? I need them for my dissertation, as a talking point that Netflix was once a content-filtered recommender. Thanks!
nice video
Let me know if you find any more channels like this. These days prediction has made YouTube channel subscriptions less important; however, I use them just as an A-list. Btw, I subscribed.
Is there any way we can filter out the background music? It doesn't really fit the topic and is really distracting.
Hi, for your explanation of collaborative filtering, are you explaining the model-based approach? I'm a little confused between the memory-based and model-based approaches to collaborative filtering.
I'm making a movie recommendation system for my final-year project. Any idea how I can get started?
Very good and funny videos bring a great sense of entertainment!
Great explanation!
Thank you
I'd love to meet the people with the most similar movie taste to me.
And you found - NONE!
Probably ppl you're already friends with
How is the preference data matrix factorized?
3:47 - Sir, can I ask where "dividing all the values by 8" comes from? How did you get the 8? Thank you, sir.
I was asking myself the same question, but I think it's something like this:
you take the highest value (in this case 28) and you know that you need a value less than or equal to 4, so you solve the inequality 28/x ≤ 4
that's just to normalize the data, so you take the largest
@@ArtOfTheProblem Couldn't fully understand - the largest what??
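For what it's worth, the step being asked about is just rescaling: pick a constant so the largest raw score lands at the top of the rating scale. Here 28 is the largest score from the thread above; the other values are made up:

```python
import numpy as np

scores = np.array([28.0, 12.0, 7.0, 20.0])   # toy raw dot-product scores

# To land the largest score exactly at 4: solve 28 / c = 4, giving c = 7.
c = scores.max() / 4
normalized = scores / c

print(normalized.max())   # 4.0
print(normalized.min())   # 1.0
```

Dividing by 8, as in the video, works too: any c with 28/c ≤ 4, i.e. c ≥ 7, keeps every score on the 0-4 rating scale.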
Great story 👍🏻
it was just awesome
glad you enjoyed sub for more
Great work 👏👏
awesome video
This was really cool
This reminds me of factor analysis...
good job, amazing video
How long would a good filtering system take to build? Say, for 10,000 people based on 100 data points.
Great video!
glad you found this helpful
Very clear video!
this was pretty good
Fantastic job again! :)
YouTube knew I was gonna like this video, you say?
thank you so much
Nice!
would love if you could help share my newest video: th-cam.com/video/5EcQ1IcEMFQ/w-d-xo.html
That was really helpful thanks
Tell me, which is true: does Netflix use deep learning or classical machine learning for its algorithm?
The background music is not working for me.
Which ML algorithm is used from 5:10 to 5:48? Can you please name it? That would be very helpful.
Thanks for sharing this amazing work.
Interestingly enough, you can do the simplest thing here, which is to repeatedly guess and keep what works.
what is the background music for?
thank you
great vid!!! thank you
what a nice video! sooo useful :)
Very nice video, but the background music distracts from the content - sorry, it's a bit annoying. Thank you for the video.
great video but the background music sounds like it comes from a horror film
Do we use classifiers in collaborative filtering?
Not really, no. There is always some sort of classification being done, but in this case not in the way you mean it, I think. In this approach we decide (by hand or algorithmically, but always beforehand) that we are going to reduce the data space to a smaller space of dimension k. Choosing k is often difficult. Then the main algorithm converges to the optimal representation, that is, the space of dimension k that best represents the data. You can look up NMF, k-means clustering, or even PCA (the last doesn't have a k and tends to overfit; in the end you have the same problem of choosing when to stop, hence choosing k).
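To make that concrete, here is a minimal NMF sketch using the classic Lee-Seung multiplicative updates (the ratings matrix is toy data I made up, and in practice k is chosen by validation, which is exactly the difficulty described above):

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = movies.
R = np.array([[5, 4, 1, 1],
              [4, 5, 1, 2],
              [1, 1, 5, 4],
              [2, 1, 4, 5]], dtype=float)

k = 2                                    # number of latent features, fixed beforehand
rng = np.random.default_rng(0)
W = rng.random((R.shape[0], k)) + 0.1    # users x k, nonnegative init
H = rng.random((k, R.shape[1])) + 0.1    # k x movies, nonnegative init

# Lee-Seung multiplicative updates: descend toward a local minimum of ||R - WH||_F
# while keeping every entry of W and H nonnegative.
for _ in range(2000):
    H *= (W.T @ R) / (W.T @ W @ H + 1e-9)
    W *= (R @ H.T) / (W @ H @ H.T + 1e-9)

print(np.round(W @ H, 1))                # close to R, using only k features per user/movie
```

The algorithm converges to *a* good rank-k representation, not a unique one, which is why the choice of k (and of stopping point) carries so much of the difficulty.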
But how do we know how many latent features to use? There must be a better way than trial and error.
What's happening to YouTube? Why do my videos keep stopping suddenly and then starting again? But when I sign into another account using a VPN, that doesn't happen, even watching the same video.
nice vid i love it
Is this in any way related to SVD? Nice video!
what is background song name?
Content filtering is still required for collaborative filtering to work.
I don't see any use for recommendation systems besides online movies and online products. Can anyone give me some other examples?
the background music is quite annoying
Can someone help me: how are we normalizing the data at 3:48?
this is a great video but why is the music so scary? T.T
Loved the explanation but song selection is really weird
Recommendation algorithms don't have enough actually useful data. What do I mean by that? First, they recommend things based on what you have watched, assuming you are interested in that topic. For example, I may click on a random video about a bass player or bass guitar style... then my feed is full of bass guitar channels. NO, I was curious about that one video, but I don't want bass channels. Also, they don't take a good survey of the person's tastes.
For example, Netflix could have a customer take a "what do you like" survey: 4 or 5 pages of 20-30 various movies, shows, etc., with the customer picking 8-10 on each page. Essentially, sprinkle in enough variety on each page to get a more accurate read on their tastes.
Netflix suggestions are usually 50% wrong for me. I would love it if they allowed a "don't recommend" option along with like, don't like, etc., so a title never shows up again in the normal lists. The Grapes of Wrath and Annie Hall are NOT on any list I would ever create, hahaha. It would also be great if we could exclude specific actors, directors, etc. from the lists, considering I can't stand Will Ferrell.
Love the vid but that ambient noise is mildly annoying ngl
I didn't know you were Canadian
So I guess all of you are similar to myself because here we are.
STAY TUNED: Next video will be on "History of RL | How AI Learned to Feel"
SUBSCRIBE: www.youtube.com/@ArtOfTheProblem?sub_confirmation=1
WATCH AI series: th-cam.com/play/PLbg3ZX2pWlgKV8K6bFJr5dhM7oOClExUJ.html
Well it's the same "reversal" of programming logic
- non NN: you give the computer some input and an algorithm and the computer will give you the output
- NN: you give the computer some input and output and the computer will give you the algorithm (the neural network)
(this is only true for training of course, at inference time you will use the input and the learned algorithm to get the output again, but the learning part is kind of like the "solving the problem" part)
Now that I think about it, I originally compared this mentally to deep learning (a neural network with more than 3 layers), but collaborative filtering seems more like a single-layer neural network: the more complicated levels of feature abstraction that come with more layers seem to be missing here; instead, all the abstraction is contained in a single layer, if I'm not mistaken?
Basically we have one weighted feature vector for the people and one for the movies and we multiply them to see how well they match (bigger total number = better match), which is also part of what NNs do.
I guess the bias and activation functions are missing since we just need the score and not a decision boundary?
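The "multiply the two feature vectors" step described above is just a dot product. A tiny sketch with invented feature values:

```python
import numpy as np

# Hypothetical learned latent features: [action, romance, sci-fi]
user     = np.array([0.9, 0.1, 0.8])
matrix_m = np.array([0.8, 0.0, 0.9])   # "The Matrix"
notebook = np.array([0.1, 0.9, 0.0])   # "The Notebook"

def match(u, m):
    # One "neuron" with no bias and no activation: just a weighted sum.
    return float(np.dot(u, m))

print(round(match(user, matrix_m), 2))   # 1.44
print(round(match(user, notebook), 2))   # 0.18
```

Exactly as the comment guesses: with no bias and no nonlinearity, the bigger raw score is the better predicted match, and no decision boundary is needed.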
It looks similar to a neural network with one hidden layer, but the activation functions are missing. These are critical for neural networks in order to be more powerful than matrix multiplication. The standard learning algorithm for neural networks (stochastic gradient descent) should still work, but there are probably faster direct methods from linear algebra to calculate the matrix entries.