Take my courses at mlnow.ai/!
You literally wrote the function I needed, thank you Greg!
You're very welcome!
Thanks!
Thanks to good people like you, we are able to learn a lot of useful skills for free. This is the best tutorial on DBSCAN I have watched so far.
So kind and really glad to hear it!
I wish I could find the words to express my gratitude to you. You are just amazing. You have clarified many concepts and I learned a lot from you. Thank you so much and God bless you. Please keep it up and upload more videos. Looking forward to seeing more videos, like one on HDBSCAN. God bless you.
That definitely sends the right message! Thank you:))
@@GregHogg Hi, I want to apply DBSCAN to images to generate clusters based on image pixel densities. Can you help me with this?
Thanks Greg, that was awesome. Explanation on the spot. I loved the part about showing how to find a *really good* model that went beyond the typical 10 min how-to video. I am new to ML coming from a research background (physics) and often I am a bit worried about the mindset "ML is easy, just watch this video, implement the algorithm and you are done". So, again, really great job, thanks.
Hmm yeah I totally get that. You're very welcome and thanks so much for the kind words!!
Great video, surely the best explanation I have seen on the topic so far.
Glad to hear it, Ayenew!
That was amazing!!!!! thanks for your sharing! brilliant brain!
Haha you're very welcome 😁
Thanks Greg. For more than 3 dimensions, using PCA to transform down to 2 features and then visualising will help.
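A minimal sketch of this tip, using toy data rather than the video's dataset: PCA-project higher-dimensional points down to 2 components so cluster labels can be scatter-plotted.

```python
# Reduce >2-dimensional data to 2 PCA components purely for visualization.
# The random matrix here stands in for whatever feature matrix you clustered.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))  # 6 features: too many to plot directly

X_2d = PCA(n_components=2).fit_transform(X)
# plt.scatter(X_2d[:, 0], X_2d[:, 1], c=cluster_labels) would then show the
# clusters found on the full 6-dimensional data.
```

Note the projection is only for plotting; the clustering itself should still run on all features.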
Thank you for showing us how to optimize a good dbscan model
My pleasure!
Thank you for the great job! Very easy to understand!
You're very welcome 😁
Thanks a lot mate, it's really insightful.
Very helpful thanks!!
You're very welcome!
Very nice! Thank you! :)
Grid search is not optimal for highly non-linear models. Scipy has a great optimization toolbox with global simplex methods like "shgo", highly suitable for non-linear global optimization tasks.
Easy to use as well. :)
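The shgo suggestion could look something like this sketch. The toy dataset, bounds, penalty value, and the silhouette objective are illustrative choices, not from the video:

```python
# Tune DBSCAN's eps and min_samples with scipy's global optimizer shgo
# instead of a grid search, maximizing silhouette score.
import numpy as np
from scipy.optimize import shgo
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=200, centers=3, random_state=42)

def neg_silhouette(params):
    eps, min_samples = params[0], int(round(params[1]))
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    # silhouette needs at least 2 clusters (ignoring noise, labeled -1)
    if len(set(labels) - {-1}) < 2:
        return 1.0  # penalty: worse than any real negated silhouette
    mask = labels != -1
    return -silhouette_score(X[mask], labels[mask])

result = shgo(neg_silhouette, bounds=[(0.1, 3.0), (2, 10)])
best_eps, best_min_samples = result.x[0], int(round(result.x[1]))
```

One caveat: shgo works on continuous variables, so the integer `min_samples` is handled here by rounding inside the objective, which is a simple hack rather than a principled approach.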
Wow, thanks Mike! I'll be sure to check these out, that's great to know. I still found that it worked pretty well, but I guess the dataset wasn't super massive. Very helpful for me and others, thank you.
Hello, thanks for the video. I have a question. I have data consisting of 30,000 data points, and these points have 3 features. I would like to calculate the 3D joint probability density of these data and plot a 3D scatter plot, where the x, y, and z axes correspond to these features, colored by probability density. I have been looking for a tool/library for that, but I could not find a way to do it. Do you have any suggestions? I really appreciate any comment. Thanks a lot!
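One way to do what this question asks, assuming scipy and matplotlib are acceptable: estimate the 3-D joint density with a Gaussian KDE, then color a 3-D scatter by it. The random array below stands in for the commenter's 30,000 × 3 data.

```python
# Kernel density estimate of a 3-D joint distribution, evaluated at each point.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = rng.normal(size=(3, 1000))  # scipy expects shape (n_features, n_points)

kde = gaussian_kde(data)
density = kde(data)  # estimated joint density at each data point

# Plotting sketch (matplotlib):
# import matplotlib.pyplot as plt
# ax = plt.figure().add_subplot(projection="3d")
# sc = ax.scatter(data[0], data[1], data[2], c=density)
# plt.colorbar(sc)
# plt.show()
```

For 30,000 points, evaluating the KDE at every point can be slow; subsampling for the plot is a common workaround.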
Great video, the optimisation guide is really helpful too for a project I am working on. Thanks!
That's super great to hear!
Hi Greg, your housing dataset had many features, but you only took 2 features, longitude and latitude (if I understood correctly), for clustering. Can we use all the other features too for making the clusters? Please help me.
Hey there, your video is absolutely great, but I just want to ask: why did you take only 2 columns from your dataset when plotting? Can we make clusters from all 12 columns in your dataset and visualize those clusters? Please suggest if there is any such algorithm available!
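A hedged sketch answering this question: DBSCAN itself accepts any number of columns, and only the 2-D plot needs a projection. The column names, values, and `eps` below are placeholders, not the video's actual data or settings.

```python
# Cluster on all columns of a dataframe: scale first so no single feature
# dominates the Euclidean distance, then fit DBSCAN on everything.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

df = pd.DataFrame({  # stand-in for the housing dataframe
    "longitude": [-122.2, -122.3, -118.4, -118.5],
    "latitude": [37.8, 37.9, 34.0, 34.1],
    "median_income": [8.3, 7.2, 5.6, 3.8],
    "housing_median_age": [41, 21, 52, 36],
})

X = StandardScaler().fit_transform(df)
labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(X)
```

For visualizing the result, a dimensionality reduction such as PCA down to 2 components (as another comment here suggests) is the usual approach.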
Hello Greg, that was super helpful, but how can I draw an elbow plot on the same graph?
thank you
Hello Greg! Thank you for the valuable in-depth explanation. When you have GPS data where time is also relevant for clustering points, how can that be used with DBSCAN? Or is there another algorithm that suits the problem better?
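One common approach to this question (my suggestion, not from the video): append time as an extra feature, scaled so that "close" in time and "close" in space are comparable, and run ordinary DBSCAN on the three columns. Dedicated variants such as ST-DBSCAN treat the spatial and temporal thresholds separately.

```python
# Spatio-temporal clustering by stacking scaled position and time columns.
# The scale factors below are hand-picked illustrations, not recommendations.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
lon = rng.uniform(-122.5, -122.0, 200)
lat = rng.uniform(37.5, 38.0, 200)
t = rng.uniform(0, 3600, 200)  # seconds since the start of the track

# Chosen scales: ~0.01 degrees of position counts the same as ~5 min of time
X = np.column_stack([lon / 0.01, lat / 0.01, t / 300.0])
labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(X)
```

The tricky part is choosing the space/time scale ratio, since it encodes how you trade spatial closeness against temporal closeness.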
Great Job, Thanks.
You're very welcome :)
Can we use a foundation model, like the OpenAI embeddings API, for text data, and then use DBSCAN clustering for recommendation purposes?
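A hedged sketch of the idea in this question: cluster text embeddings with DBSCAN using cosine distance, then recommend items that share a cluster. Random vectors stand in for real embeddings from an embeddings API, and `eps` is illustrative.

```python
# DBSCAN over embedding vectors with cosine distance, a common choice for
# text embeddings where direction matters more than magnitude.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 384))  # 50 "documents", 384-dim vectors

labels = DBSCAN(eps=0.5, min_samples=3, metric="cosine").fit_predict(embeddings)
# Items sharing a label could then be recommended together; label -1 is noise.
```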
great video
A dataset chosen for DBSCAN analysis should contain meaningful clusters, which rather does not seem to be the case with the California housing dataset :)
Where can I get this dataset?
Sir, while using grid search for DBSCAN, is it necessary to use cross-validation to prevent overfitting?
Hello! Thanks so much for the tutorial! But I have a problem: I tried it with my data, which has a lot of columns. Can I do the search for epsilon and min_samples with all the columns, or does it have to be with 2? The error is: operands could not be broadcast together with shapes (33026,) (6,).
I hope someone can help me, thanks!
TYSM Greg :)
Very welcome!
Hi Greg, thanks a lot for this awesome video.
Could you please make the same content for HDBSCAN?
Unlike k-means, there is no option to predict new values with DBSCAN in sklearn. There is only fit_predict(), which will just create new clusters. Why is that? Is there a way we could predict which cluster new data points will go to?
People are very divided on this feature. Some argue that, technically, there should not be any prediction for a clustering model. Others (including me, honestly) think you might as well have a prediction function.
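For anyone who wants such a prediction function anyway, a common workaround (not part of sklearn's DBSCAN API) is to assign a new point to the cluster of its nearest core sample if that sample lies within `eps`, and otherwise label it noise:

```python
# Sketch of a "predict" for a fitted DBSCAN: nearest core sample within eps
# decides the cluster; anything farther away is labeled -1 (noise).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, random_state=42)
db = DBSCAN(eps=1.0, min_samples=5).fit(X)

def dbscan_predict(db, X_new):
    core = db.components_                             # core sample coordinates
    core_labels = db.labels_[db.core_sample_indices_]  # their cluster labels
    preds = np.full(len(X_new), -1)
    for i, x in enumerate(X_new):
        dists = np.linalg.norm(core - x, axis=1)
        j = dists.argmin()
        if dists[j] <= db.eps:
            preds[i] = core_labels[j]
    return preds

new_points = np.array([[0.0, 0.0], [100.0, 100.0]])
preds = dbscan_predict(db, new_points)  # the far-away point stays -1
```

This mirrors how DBSCAN assigns border points during fitting, though it deliberately never creates new clusters, which is exactly the behavior `fit_predict` would not give you.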
Thanks so much
Sir, can you make a video about any meta-heuristic technique for clustering?
I'll have to look into this.
@@GregHogg looking forward to it
Thank you.
Can you make a video on spectral clustering, affinity propagation, and BIRCH?
At some point, absolutely.
DBSCAN literally takes forever to run on a relatively large dataset with multiple features. Is there any method to speed up the process?
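A few knobs that often help with this (general advice, not from the video): pick a tree-based neighbor index explicitly, parallelize the neighbor queries, and shrink the data itself.

```python
# Speed-oriented DBSCAN settings in sklearn.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4)).astype(np.float32)  # float32 halves memory

labels = DBSCAN(
    eps=0.5,
    min_samples=5,
    algorithm="ball_tree",  # tree index instead of brute-force distances
    n_jobs=-1,              # run the neighbor queries on all CPU cores
).fit_predict(X)
```

Beyond these flags, common options are subsampling the data, reducing dimensionality first (e.g. PCA), or switching to HDBSCAN, whose implementations are often faster on large datasets.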
Hi Greg,
I am new to programming (I have some knowledge of MATLAB).
I started with the Python for Everybody specialization, and now I am also doing the Google Data Analytics Professional Certificate course. After this I am planning to study the ML and Deep Learning specializations from Andrew Ng. Is this knowledge enough to land an ML Engineer job? Or do you have any other suggestions?
(Note: I am not from a computer science background)
The information will be tremendously valuable, and is essentially a requirement. I can't promise you will land a job after it, and there's certainly more to learn on the coding front, but this is excellent and necessary progress.
@@GregHogg Thanks Greg for your reply!
@@gopinathk5094 Best of luck 😃
Hi
I need help