Ma'am, if I have images and their feature descriptors from a convolutional neural network like ResNet, what distance metric would be appropriate to use, and why? They are not probability distributions, but one feature vector per image.
Great question! When working with feature vectors from a convolutional neural network (CNN) like ResNet, Euclidean distance is a common choice. It measures the straight-line distance between two points in space, which makes it a natural way to compare vectors.
However, for high-dimensional data like image feature vectors, you might also consider distance metrics that cope better with the curse of dimensionality. Cosine similarity is one such metric: it measures the cosine of the angle between two vectors, ignoring their magnitudes, and tends to be more robust in high-dimensional spaces.
Experimenting with different distance metrics and observing their impact on the performance of your K-Nearest Neighbors (KNN) algorithm is good practice. It's essential to choose a metric that aligns with the characteristics of your data and the goals of your analysis. Happy coding! 📊🤖✨
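To make the two metrics above concrete, here is a minimal sketch in NumPy. The 2048-dimensional vectors are random stand-ins for ResNet feature descriptors (no actual features appear in this thread), and the function names are just for illustration:

```python
import numpy as np

def euclidean_distance(u, v):
    # Straight-line (L2) distance between two feature vectors.
    return np.linalg.norm(u - v)

def cosine_distance(u, v):
    # 1 - cosine similarity; depends only on the angle between the
    # vectors, not on their magnitudes.
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Two hypothetical 2048-dim feature vectors (random stand-ins for
# ResNet descriptors).
rng = np.random.default_rng(0)
a = rng.standard_normal(2048)
b = rng.standard_normal(2048)

print(euclidean_distance(a, b))
print(cosine_distance(a, b))
```

Note how scaling a vector changes its Euclidean distance to others but leaves its cosine distance unchanged, which is one reason cosine is popular for comparing CNN embeddings.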
@Analyticsvidhya Thank you, ma'am. I had one more doubt:
in the video we do min-max scaling. Should the same scaling apply when the distance metric is cosine, or should another normalization, like z-score, be used?
Ma'am, can you tell me how you get the p1 − q1 and p2 − q2 values in the graph? Please explain.
All in all, a good explanation.
Hi Ankit, it would help if you could specify the exact time in the video where this query arises.
Anyway, I think your question is about the part where the Pythagorean theorem is used to calculate the Euclidean distance.
Here, the distance a is along the Y-axis. Given the coordinates of points A (P1, P2) and B (Q1, Q2), subtracting the two Y-coordinates, P2 and Q2, gives the distance a. Likewise, subtracting the X-coordinates gives b.
Hope this answers your question. 👍
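As a small worked example of that Pythagoras step (the point names follow the reply above; the coordinate values are made up for illustration):

```python
import math

def euclidean_2d(p, q):
    # p = (P1, P2), q = (Q1, Q2).
    b = p[0] - q[0]  # difference of X-coordinates: the horizontal leg
    a = p[1] - q[1]  # difference of Y-coordinates: the vertical leg
    # The distance is the hypotenuse, by the Pythagorean theorem.
    return math.sqrt(a**2 + b**2)

print(euclidean_2d((1, 2), (4, 6)))  # legs of length 3 and 4 → 5.0
```

The sign of a or b does not matter, since both legs are squared before summing.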