DBSCAN Algorithm | Machine Learning with Scikit-Learn Python

  • Published on 20 Dec 2024

Comments • 47

  • @crazyfootball2271 · 3 years ago

    Very easy to understand for 1st timers. Great work. Appreciated.

  • @anushamv3190 · 3 years ago (+1)

    Hello sir,
    Which algorithm works well for customer segmentation w.r.t. Recency, Frequency, Monetary?
    And is it necessary to apply all the algorithms, i.e. K-means, DBSCAN, and hierarchical clustering, to the dataset and then come to a conclusion?

  • @NhuNguyen-gl3dv · 4 years ago

    Excellent video. Very well explained. Thank you so much.

  • @zakariaghalmane1547 · 4 years ago

    Thank you for this very useful video

  • @arash_mehrabi · 3 years ago

    nice, clear explanation, thank you.

  • @elenatagliabue6625 · 3 years ago (+2)

    Great! One question: what do you mean when you write "dist i=dist of the 5th neighbor of the ith data point"? What is the neighbor in this case? Thank you

    • @NormalizedNerd · 3 years ago

      dist = an array of n elements
      dist[i] stores the distance from the i-th data point to its 5th nearest data point
      n = number of data points
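
      A minimal sketch of how that dist array could be computed with scikit-learn's NearestNeighbors; the toy dataset and the choice of the 5th neighbor are placeholders standing in for the video's setup:

        import numpy as np
        from sklearn.datasets import make_moons
        from sklearn.neighbors import NearestNeighbors

        # Toy data as a stand-in for the dataset used in the video
        X, _ = make_moons(n_samples=300, noise=0.05, random_state=42)

        # n_neighbors=6 because the query point itself comes back at distance 0
        distances, _ = NearestNeighbors(n_neighbors=6).fit(X).kneighbors(X)

        dist = distances[:, 5]  # dist[i] = distance from point i to its 5th nearest neighbor
        dist.sort()             # plotting the sorted values gives the elbow used to pick eps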

  • @poojachindarkar1207 · 4 years ago

    you made that easy! glad that i found you :)

  • @laurynasgrusas8755 · 4 years ago

    This was very helpful. Thank you!

  • @kannavjiya_raja · 3 years ago

    Great! Can you please provide a more detailed explanation of the DBSCAN algorithm?

  • @Slypie2112 · 3 years ago

    How do we extract the exact values of the outliers from the dataset after this DBSCAN clustering? Thank you

    • @NormalizedNerd · 3 years ago

      Exact values of the outliers...meaning?
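
      If "exact values" means the actual data points DBSCAN marks as noise, a minimal sketch (the toy data and the eps/min_samples values are placeholders): in scikit-learn, noise points receive the label -1.

        from sklearn.cluster import DBSCAN
        from sklearn.datasets import make_moons

        X, _ = make_moons(n_samples=300, noise=0.1, random_state=0)
        labels = DBSCAN(eps=0.15, min_samples=5).fit_predict(X)

        outliers = X[labels == -1]  # rows labelled -1 are the noise/outlier points
        print(len(outliers), "outliers found")
        print(outliers)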

  • @elvykamunyokomanunebo1441 · 2 years ago

    Hello Normalizer, I am wondering:
    If DBSCAN doesn't handle higher dimensionality very well, does standardizing improve performance if there is a moderate degree of correlation between features/dimensions?

  • @mathavraj9662 · 4 years ago (+1)

    By 5th neighbour, do you mean the 5th radially farthest point from the i-th point? What if many points are tied at the 5th position?

    • @NormalizedNerd · 4 years ago (+1)

      A point can have any number of equidistant neighbors. The algorithm just checks how many points are inside the circle.

  • @MiyaBhai-dj4on · 3 years ago (+3)

    Please provide us the code so we can copy it.

  • @stonecastle858 · 4 years ago (+1)

    z surely can't refer to neighbours only, it must also include the point itself?

    • @NormalizedNerd · 4 years ago (+1)

      Yes, z includes the point itself. (Sorry for the late reply)
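
      In scikit-learn specifically, min_samples (the z above) does count the point itself. A tiny sketch to illustrate; the points and eps are chosen only for the demonstration:

        import numpy as np
        from sklearn.cluster import DBSCAN

        # Three points that are all within eps of one another
        X = np.array([[0.0], [0.5], [1.0]])

        # With min_samples=3, each point's eps-neighbourhood (itself plus the two
        # others) already reaches 3, so one cluster forms: the count includes the point.
        print(DBSCAN(eps=1.0, min_samples=3).fit_predict(X))  # -> [0 0 0]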

  • @saiakhileshande8486 · 2 years ago

    Thank you for the video with a clear explanation. Could you also show how to find optimal z and epsilon in sklearn?
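
    One rough way to search for eps and min_samples (z) in scikit-learn is a small grid scored with the silhouette coefficient on the non-noise points; this is only a heuristic sketch on placeholder data, not the method from the video:

      import numpy as np
      from sklearn.cluster import DBSCAN
      from sklearn.datasets import make_moons
      from sklearn.metrics import silhouette_score

      X, _ = make_moons(n_samples=300, noise=0.08, random_state=1)

      best = None
      for eps in np.arange(0.05, 0.55, 0.05):
          for min_samples in (3, 5, 10):
              labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
              mask = labels != -1                      # score only the clustered points
              clusters = set(labels[mask])
              if len(clusters) < 2 or mask.sum() <= len(clusters):
                  continue                             # silhouette needs >= 2 clusters
              score = silhouette_score(X[mask], labels[mask])
              if best is None or score > best[0]:
                  best = (score, eps, min_samples)

      print("best (silhouette, eps, min_samples):", best)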

  • @arijitRC473 · 4 years ago

    Well explained content!!

  • @haneulkim4902 · 4 years ago

    Thanks for the great video! I have two questions that I want to ask:
    1. You said DBSCAN performs poorly on high-dimensional data; how many dimensions are considered high?
    2. Why is it bad for high-dimensional data?

    • @NormalizedNerd · 4 years ago (+1)

      1. That's a very subjective question. For some datasets it's 100 for others it might be 1000. It depends on the distribution of the data.
      2. Because we are using Euclidean distance to find the neighborhood points. Euclidean distance is bad for searching in higher dimensions because a hypersphere covers only a tiny fraction of the volume of its circumscribing hypercube!

    • @haneulkim4902 · 4 years ago

      ​@@NormalizedNerd
      Thanks for answering!
      1. Distribution of each feature? Can't we just normalize all features?
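
      To make the hypersphere-vs-hypercube point above concrete, a small sketch (pure illustration, not code from the video) of the fraction of a hypercube's volume taken up by its inscribed ball as the dimension grows:

        import math

        def ball_to_cube_ratio(d: int) -> float:
            """Volume of a d-ball divided by the volume of its circumscribing hypercube."""
            # V_ball(r) = pi^(d/2) / Gamma(d/2 + 1) * r^d and V_cube = (2r)^d, so r cancels.
            return math.pi ** (d / 2) / math.gamma(d / 2 + 1) / 2 ** d

        for d in (2, 3, 5, 10, 20, 50):
            print(f"d={d:2d}: the ball fills {ball_to_cube_ratio(d):.2e} of the cube")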

  • @saylik1094 · 3 years ago

    Very nice explanation. Thank you!!
    Can you please make a video on HDBSCAN?

  • @rezamahendra8418 · 3 years ago

    How can we input Excel or CSV data while using this algorithm?

    • @NormalizedNerd · 3 years ago

      Pretty easy...
      import pandas as pd
      df = pd.read_csv("path_to_csv_file.csv")
      # then use iloc to select columns for features and target variables and put them in X and Y

  • @dragoneagle11 · 4 years ago

    Great video! Is there any function built into scikit-learn that can plot the clusters like the show_clusters function you have in this video?

    • @NormalizedNerd · 4 years ago (+1)

      I don't know if scikit-learn can do that, but you can do a scatter plot using seaborn to indicate the clusters.
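
      A minimal sketch of that scatter-plot idea (matplotlib is used directly here; seaborn's scatterplot with hue=labels works the same way, and the data and parameters are placeholders):

        import matplotlib.pyplot as plt
        from sklearn.cluster import DBSCAN
        from sklearn.datasets import make_moons

        X, _ = make_moons(n_samples=300, noise=0.08, random_state=2)
        labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

        # Colour each point by its cluster label; noise points (-1) get their own colour
        plt.scatter(X[:, 0], X[:, 1], c=labels, cmap="viridis", s=15)
        plt.title("DBSCAN clusters")
        plt.show()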

  • @cruzab3153 · 3 years ago

    Very useful... So I have one doubt: assuming we have created the clusters, how do we create a buffer or outer polygon for those clusters?

    • @NormalizedNerd · 3 years ago

      Thanks!
      You need something called convex hull.

    • @cruzab3153 · 3 years ago

      @@NormalizedNerd thanks man... that's everything I need ...

    • @cruzab3153 · 3 years ago

      OMG man it's working.....I have been searching in the wrong direction for over 1 week....this one word opened doors to all my answers😭😭... thanks again man....

    • @NormalizedNerd · 3 years ago (+1)

      @@cruzab3153 Haha...Yeah it happens. Happy to help :D
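
      For reference, a minimal sketch of the convex-hull idea with scipy, building one outer polygon per DBSCAN cluster (the data and parameters are placeholders):

        from scipy.spatial import ConvexHull
        from sklearn.cluster import DBSCAN
        from sklearn.datasets import make_blobs

        X, _ = make_blobs(n_samples=300, centers=3, random_state=3)
        labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(X)

        for k in set(labels) - {-1}:         # skip the noise label
            pts = X[labels == k]
            if len(pts) < 3:
                continue                     # a 2-D hull needs at least 3 points
            hull = ConvexHull(pts)
            polygon = pts[hull.vertices]     # vertices of cluster k's outer polygon, in order
            print(f"cluster {k}: hull with {len(polygon)} vertices")

      For an actual buffer around each polygon, one option is a geometry library such as shapely, e.g. Polygon(polygon).buffer(distance).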

  • @fitrianinasir1321 · 3 years ago

    Thank you so much, what a great explanation! I have a question: can we use PCA before doing clustering with DBSCAN? If yes, which dimensionality should I use: before PCA (in this case I have 30 dimensions), or after PCA with 3 dimensions?

    • @NormalizedNerd · 3 years ago

      Yes, you can try to reduce the dimension using PCA and then cluster using DBSCAN.

    • @fitrianinasir1321 · 3 years ago

      @@NormalizedNerd Then for MinPts, in case I use the PCA dataframe to fit the DBSCAN algorithm, which one should I use: MinPts = 2*30 - 1 = 59 (original number of features) or MinPts = 2*3 - 1 = 5 (PCA features)? (This refers to the heuristic approach by the inventor of the DBSCAN algorithm, Martin Ester, 1996.)
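
      A minimal sketch of the PCA-then-DBSCAN pipeline being discussed (the dataset here is just a stand-in with 30 features, the 3 components and the DBSCAN parameters are placeholders, and the MinPts heuristic question above is left open):

        from sklearn.cluster import DBSCAN
        from sklearn.datasets import load_breast_cancer
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        X = load_breast_cancer().data  # 30 features, like the question above

        # Standardize, project to 3 principal components, then cluster the reduced data
        X_reduced = make_pipeline(StandardScaler(), PCA(n_components=3)).fit_transform(X)

        labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X_reduced)
        print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))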

  • @Ajitshukla07 · 4 years ago

    Very well explained. Can we get more use cases for DBSCAN for a better understanding?

    • @NormalizedNerd · 4 years ago

      give this a read: datascience.stackexchange.com/questions/10063/for-which-real-world-data-sets-does-dbscan-surpass-k-means

  • @pratyakshmathur2334 · 3 years ago

    Can you please do it with an image?