HDBSCAN, Fast Density Based Clustering, the How and the Why - John Healy

  • Published Feb 6, 2025

Comments • 40

  • @benhurrodriguez1807
    @benhurrodriguez1807 9 months ago +12

    Presentation Skills: 100000/10

  • @RajatSaxena35
    @RajatSaxena35 2 years ago +16

    Presentation Skills: 10/10

  • @reocam8918
    @reocam8918 3 years ago +4

    Nice presentation, I see 200% confidence and eloquence

  • @alexanderdevaux661
    @alexanderdevaux661 3 years ago +12

    this is exactly what I have been looking for! great presentation.

  • @-beee-
    @-beee- a year ago +1

    Wow, what a great talk! Love the intuitive explanations and visuals. Super helpful. Thank you!

  • @21rufus21
    @21rufus21 a year ago +3

    Absolutely fantastic presentation, thank you

  • @vunder8737
    @vunder8737 4 months ago

    This truly was a wonderful presenter; I would love to listen to him give other presentations.

  • @MrRaisin56
    @MrRaisin56 2 years ago +3

    Wow I love the enthusiasm! It really makes it so much nicer to watch. Very insightful as well thank you very much!

  • @jiayangcheng
    @jiayangcheng 4 months ago

    Love the presentation. Great work!

  • @alaaelhadba7310
    @alaaelhadba7310 a year ago +1

    Thank you so much. It was exactly what I was looking for 🎉🎉

  • @hannahnelson4569
    @hannahnelson4569 8 months ago +1

    A very impressive presentation and algorithm! Thank you for teaching all this!

  • @pankajgoikar4158
    @pankajgoikar4158 2 years ago +2

    Awesome presentation.

  • @opelfrost
    @opelfrost 2 months ago

    thanks a lot, learned a lot from this presentation

  • @vampierkill
    @vampierkill 2 years ago +4

    Sorry, had to comment because of the kiiiiiiick-ass animation! Brilliant.

  • @honey-py9pj
    @honey-py9pj 2 years ago +1

    what an amazing speaker!

  • @edwardmalthouse973
    @edwardmalthouse973 a month ago

    Thank you for your presentation. It was very helpful. I'm not sure about the claim that k-means requires small amounts of data. I believe k-means is O(n) (assuming a small number of dimensions and iterations), and I have used it on very large data sets without problems.
    I would also like to respectfully push back on the spherical cow comment. While it certainly depends on the domain, in social science and business applications with large, noisy data sets, the spherical, or at least elliptical, assumption often works very well and produces better results than the more nonparametric algorithms. It's easy to construct mathematical examples with odd-shaped clusters, but I've not encountered them in practice, although that could just be due to the domains I work in.
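
The commenter's O(n) point is easy to probe empirically: Lloyd's algorithm costs roughly O(n · k · d) per iteration, so wall time should grow about linearly in n. A rough sketch with scikit-learn (dataset sizes and parameters here are illustrative, not from the comment):

```python
import time
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
timings = {}
for n in (10_000, 20_000, 40_000):
    X = rng.normal(size=(n, 8))  # synthetic stand-in data
    t0 = time.perf_counter()
    KMeans(n_clusters=5, n_init=10, max_iter=50, random_state=0).fit(X)
    timings[n] = time.perf_counter() - t0
print(timings)  # wall time should grow roughly linearly with n
```

Iteration counts to convergence vary with the data, so the scaling is only approximately linear in practice.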

  • @danaizenberg2402
    @danaizenberg2402 a year ago +2

    great talk

  • @sushilkhadka-iu3gf
    @sushilkhadka-iu3gf a year ago +1

    that was a great talk!

  • @valeryzuev3957
    @valeryzuev3957 3 years ago +4

    15:30 — there might be a misprint in the formula: it should be d(X_i, X_j), not d(X_j, X_j)
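
The formula the comment refers to is HDBSCAN's mutual reachability distance, d_mreach-k(X_i, X_j) = max(core_k(X_i), core_k(X_j), d(X_i, X_j)). A minimal NumPy sketch of that quantity (function and parameter names are mine, not from the talk):

```python
import numpy as np

def mutual_reachability(X, k=5):
    """Mutual reachability distance matrix:
    max(core_k(a), core_k(b), d(a, b)) for every pair of points a, b."""
    # Pairwise Euclidean distances.
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # Core distance: distance to the k-th nearest neighbour
    # (each sorted row includes the point itself at position 0).
    core = np.sort(d, axis=1)[:, k]
    return np.maximum(d, np.maximum(core[:, None], core[None, :]))

X = np.random.default_rng(0).normal(size=(30, 2))
m = mutual_reachability(X, k=5)
```

The matrix is symmetric in i and j, so the d(X_j, X_j) on the slide is presumably just a typo for d(X_i, X_j).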

  • @nihshrey
    @nihshrey a year ago +1

    Amazing

  • @TrixieFromSanFran
    @TrixieFromSanFran 2 years ago

    The coloring of the tree at 14:00 is needlessly confusing. See Figure 3a in their paper (McInnes & Healy, 2017) to clarify things.

  • @maximillianweil2672
    @maximillianweil2672 a year ago +3

    Thank you for the super interesting talk! I was wondering if you have worked with the new HDBSCAN integrated in sklearn 1.3.0? Is it possible to draw the cluster tree with this implementation?

  • @ahmedayman2380
    @ahmedayman2380 a year ago +2

    Can someone tell me his LinkedIn or his full name, please, or how to connect with him?

    • @RoulDukeGonzo
      @RoulDukeGonzo 7 months ago

      0:24 name and email

  • @RoulDukeGonzo
    @RoulDukeGonzo 7 months ago

    Any idea why the GPU version of this method can't take a pre-computed distance matrix?

    • @scatteredvideos1
      @scatteredvideos1 5 months ago

      There is a RAPIDS version of HDBSCAN. I'm personally struggling to get the dependencies working together, but it does exist.

    • @RoulDukeGonzo
      @RoulDukeGonzo 5 months ago

      @@scatteredvideos1 I think that's what I used... Anyway, I'll give it another go.

    • @scatteredvideos1
      @scatteredvideos1 5 months ago

      To be honest, the speed-up really isn't even that great; it's only partially parallelized on GPUs. It's better just to reduce the dimensionality of your data: PCA to 95% of explained variance, then UMAP to 10 or so dims, then cluster using HDBSCAN. I've found doing a grid search over a bunch of different HDBSCAN parameters can be helpful if you aren't getting perfect clustering.

    • @scatteredvideos1
      @scatteredvideos1 5 months ago +1

      With 10 UMAP dims and 184k data points, my clustering is done in about 7 s on a Google Colab high-RAM CPU instance.

    • @RoulDukeGonzo
      @RoulDukeGonzo 5 months ago +1

      @@scatteredvideos1 I haven't tried GPU-accelerated HDBSCAN, but for other clustering algorithms the difference between CPU and GPU is night and day (so I was expecting it to be so here). I'm clustering embedding data from LLMs, so it's extremely dense and uncorrelated, and PCA hasn't been much use (at least in my hands).

  • @pahulhallan
    @pahulhallan 2 years ago

    27:50 Installation

  • @0MVR_0
    @0MVR_0 8 months ago

    Clustering is highly driven by the structure of how the data relates to itself,
    and is nearly impossible to accomplish with a single, one-size-fits-all method.

    • @RoulDukeGonzo
      @RoulDukeGonzo 7 months ago

      Agree, but in practical terms, where do you start?

    • @0MVR_0
      @0MVR_0 7 months ago +1

      @@RoulDukeGonzo An intimate descriptive knowledge of the data is recommended.

  • @laughingsaeed
    @laughingsaeed 3 months ago

    I don't know why he's talking so fast! Is someone after him and he needs to run away?!