How Swarms Solve Impossible Problems

  • Published 11 Dec 2024

Comments • 69

  • @matveyshishov
    @matveyshishov 2 months ago +30

    Hey, I've been working on this for several years now, one of my fav research areas, which I hope to see become applied really soon. Glad to see your interest!

    • @TheDevildragon95
      @TheDevildragon95 2 months ago

      Hi, are you working on the problem or swarm algorithms?

    • @jeffreychandler8418
      @jeffreychandler8418 2 months ago

      I'm curious about your general experience/takes on these.

  • @BohonChina
    @BohonChina 2 months ago +127

    This is so-called ant colony optimization (ACO) or particle swarm optimization (PSO) from computational intelligence. Artificial neural networks and deep learning also belong to computational intelligence, but ACO and PSO are not popular anymore.

    • @perspective2209
      @perspective2209 2 months ago +14

      It's a really interesting topic, and another great example of swarm intelligence is slime mold, which was used to design an efficient subway network in Tokyo.

    • @mrguiltyfool
      @mrguiltyfool 2 months ago +6

      Is there any reason why it is not popular anymore?

    • @w花b
      @w花b 2 months ago +1

      @mrguiltyfool Probably for the same reasons as most techniques that aren't popular in their respective fields.

    • @Entropy67
      @Entropy67 2 months ago +15

      @mrguiltyfool It's worse than other methods; we can literally approximate any behaviour with enough data using more modern (and relatively efficient) techniques (i.e. backpropagation). This one is reliant on the granularity of the swarm (inefficient). It's like using apples to describe addition: we can do it without the apples now, lol. Well, that's somewhat of an unfair comparison; there are still use cases for it (lack of data, a natural abstraction for a problem), but it's not popular.

    • @hypophalangial
      @hypophalangial 2 months ago +16

      Training neural networks involves solving an optimization problem. Early neural network researchers chose gradient descent to solve this optimization problem, and everyone since then has continued to use gradient descent because it's very simple and it gets the job done. The optimization problem in neural networks has many local minima that are all roughly as good as each other, so there hasn't been any reason to adopt more complicated or robust optimization techniques like swarms. Neural networks don't need to find the best solution. Any local minimum will do.
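
(As a toy illustration of the point above, not from the video: the loss function here is made up, but it shows plain gradient descent settling into whichever local minimum its random starting point leads to.)

```python
import numpy as np

# A bumpy 1-D loss with many local minima; which one gradient descent
# finds depends entirely on the random starting point.
def loss(x):
    return np.sin(3 * x) + 0.1 * x**2

def grad(x):
    return 3 * np.cos(3 * x) + 0.2 * x

rng = np.random.default_rng()
x = rng.uniform(-5, 5)      # the random init picks the basin
for _ in range(1000):
    x -= 0.01 * grad(x)     # plain gradient descent step

print(f"settled at x={x:.3f}, loss={loss(x):.3f}")
```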

  • @davidetl8241
    @davidetl8241 2 months ago +33

    crazy good content and animations. thank you

  • @juanmacias5922
    @juanmacias5922 2 months ago +11

    Such an awesome concept, thanks for sharing!

  • @Sjoerd-gk3wr
    @Sjoerd-gk3wr 2 months ago +17

    Great video. (Kinda just commenting to boost your videos in the algorithm, because these videos deserve more views)

  • @JJ-fr2ki
    @JJ-fr2ki 2 months ago +1

    Excellent. As an old philosopher of this business, you managed to dodge every common conceptual error in a field that swarms with misunderstandings.

  • @anon_y_mousse
    @anon_y_mousse 2 months ago +1

    That was a much more interesting way to relate the problem than I've seen done before. I also like the nature shots as a lead up. Really breaks the tedium of an otherwise boring subject.

  • @hamadrehman853
    @hamadrehman853 2 months ago +2

    without a doubt one of the best channels on youtube. This is premium quality content.

    • @LeonardNemoy
      @LeonardNemoy 2 months ago

      are you kidding me? I was hoping for some real world examples, instead I got a tedious logical explanation with no soul. SO LAME. THANK YOU FOR YOUR HARD WORK.

  • @airbound1779
    @airbound1779 2 months ago

    When I sign up to Brilliant I’ll use your link, you’ve earned it

  • @a3r797
    @a3r797 2 months ago +2

    How does this only have 2000 views? This is such a high quality video.

  • @demonslime
    @demonslime 2 months ago +1

    This sounds like doing gradient descent multiple times just with extra steps

  • @kingki1953
    @kingki1953 2 months ago

    You explain better than my lecturer. Thanks 🎉

  • @Titouan_Jaussan
    @Titouan_Jaussan 2 months ago +8

    Still waiting to know what color theme he is using, it looks incredible

    • @b001
      @b001 2 months ago +4

      Synthwave’84, no glow

    • @Titouan_Jaussan
      @Titouan_Jaussan 2 months ago +2

      @b001 Thank you so much!!! And congrats on the video btw, such a great topic and great animations, keep going!!

  • @wanfuse
    @wanfuse 2 months ago

    The reason it is so interesting is not that it is better than Adam, GD, etc.; it is so interesting because it massively parallelizes the search with "low energy" expenditure. There are much more efficient algorithms for high-dimensional spaces though, far better than Adam or GD.

  • @talkingbirb2808
    @talkingbirb2808 2 months ago

    Somehow it reminded me of grid search, random search and Bayesian search

  • @popel_
    @popel_ 2 months ago

    BOOL FINALLY DROPPED!

  • @manfyegoh
    @manfyegoh 2 months ago +5

    Sounds somewhat similar to a KNN calculation

  • @sabbirhossan3499
    @sabbirhossan3499 2 months ago

    Great video, it makes hard things simple!

  • @AG-ur1lj
    @AG-ur1lj 2 months ago

    Really hoping this video fully addresses its title, cuz I spent all the time learning to implement this sh** and I’m struggling to find applications other than bragging about how 1337 I am

  • @StevenHokins
    @StevenHokins 2 months ago

    Cool video, thank you ❤

  • @Djellowman
    @Djellowman 2 months ago +3

    The inertia + memory vector makes no sense. Not only would they cancel each other out, they also won't make an agent revisit the original area. They just make agents slower to converge on the global best position.

    • @babsNumber2
      @babsNumber2 2 months ago +1

      He mentions that those vectors can have different weights, so you can tweak the algorithm to favor the inertia, the social best score, or the memory. So there are versions of the algorithm where the inertia and memory vectors don't cancel out.

    • @blu_skyu
      @blu_skyu 2 months ago +1

      They only cancel out on the first step away from the personal best. If the particle has travelled away since then, the inertia and memory vectors can have different angles too.
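
(For concreteness, here is a minimal sketch of the standard PSO velocity update being debated in this thread — plain NumPy, illustrative only; `w`, `c1`, and `c2` are the tunable inertia, memory, and social weights the replies mention.)

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0):
    rng = np.random.default_rng(0)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                               # each particle's best-so-far
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()         # the swarm's best-so-far

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # inertia term + memory (cognitive) term + social term
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)

        vals = np.array([fitness(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()

    return gbest, pbest_val.min()

# Example: the sphere function, whose minimum is at the origin.
best, val = pso_minimize(lambda x: float(np.sum(x**2)), dim=2)
print(best, val)
```

Note that the memory term `(pbest - pos)` is zero only while a particle sits exactly at its personal best; once inertia carries it away, the two vectors generally point in different directions, as the reply above observes.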

  • @DanielPham831
    @DanielPham831 2 months ago

    Hi, what did you use to make the video, or the animations in this video?

  • @aracoixo3288
    @aracoixo3288 2 months ago +1

    Swarm School

  • @4thpdespanolo
    @4thpdespanolo 2 months ago +1

    Swarm optimization is unfortunately not feasible for very large search spaces

  • @TugasTugas-e5w
    @TugasTugas-e5w 6 days ago

    Brilliant

  • @roguelegend4945
    @roguelegend4945 2 months ago

    oh i get it, Pascal's triangle numbers represent 2/3 = 0.6666..., but it also represents 1/2 of a circumference, but it also represents a whole number = one = 1 universe... yeah i know this is beyond crazy to math scientists, but it is accurate...

  • @PowerGumby
    @PowerGumby 2 months ago

    can swarms solve the problem of odd perfect numbers? (OPNs)

  • @rajeshpoddar5763
    @rajeshpoddar5763 2 months ago

    What BGM did you use?

  • @andrestorres7343
    @andrestorres7343 2 months ago

    How does this method compare to something like a genetic algorithm?
    Under what assumptions would this outperform (converge faster than) a genetic algorithm?

  • @Yours--Truly
    @Yours--Truly 2 months ago

    Being the closest to the green squares in the given examples is also being the farthest away from them. Was that intentional? 😂

  • @hackerbrinelam5381
    @hackerbrinelam5381 2 months ago

    5:30 - 5:48 I thought: can this run on a neural network?

  • @luke.perkin.inventor
    @luke.perkin.inventor 2 months ago

    Does this really scale? Rather than 3 warehouses in 2D, what if it was W warehouses in N dimensions? Like 100 in 100? It seems like there are a lot of arbitrary choices in the fitness function, or is there theoretical grounding?

    • @jeffreychandler8418
      @jeffreychandler8418 2 months ago +1

      From what I gathered from my limited experience, these swarm algorithms can be amazing in complex optimization problems (so rather than finding just the minimum, it's finding minima, maxima, midpoints, etc.), but their scaling is pretty poor. Backpropagation is just insanely efficient, while this basically calculates pairwise distances, then uses those to create vectors, then has a memory component, plus a global memory component, for multiple points. The multiples multiply quickly.
      As for the fitness function: you must define the actual optimization more explicitly than in most ML applications, which is theory based. Weighing the vectors is similar to the learning rate in gradient descent. There's no one-size-fits-all answer, but there are rules of thumb that are generally good.

    • @luke.perkin.inventor
      @luke.perkin.inventor 2 months ago +1

      @jeffreychandler8418 Thanks for explaining. I looked a little more into it too, and even the trade-offs involved in nearest-neighbour search are quite nuanced: figuring out, for a given problem, how much to invest in precomputing a graph/tree/reduced-dimensionality approximation first, or just doing N comparisons every step for each particle.

    • @jeffreychandler8418
      @jeffreychandler8418 2 months ago +1

      @luke.perkin.inventor That is the fun part of optimization; it is an endless rabbit hole of odd nuances. Like, I've worked on computing nearest neighbors to predict stuff and used a lot of little tricks to avoid expensive pairwise calculations, sorts, etc.
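
(A rough sketch of the scaling question raised in this thread. The setup below is hypothetical, modeled on the video's warehouse example: a particle encodes all W warehouse positions at once, so the PSO search space has W × N dimensions, and every fitness evaluation is itself a pairwise-distance computation over all delivery points.)

```python
import numpy as np

W, N = 3, 2                           # warehouses, spatial dimensions
rng = np.random.default_rng(1)
deliveries = rng.random((100, N))     # hypothetical delivery points

def fitness(particle):
    """Total distance from each delivery point to its nearest warehouse.

    `particle` packs all W warehouse positions into one flat vector of
    length W * N -- so 100 warehouses in 100 dimensions would mean a
    10,000-dimensional search space.
    """
    warehouses = particle.reshape(W, N)
    # pairwise distances: broadcast (points, 1, N) against (1, W, N)
    d = np.linalg.norm(deliveries[:, None, :] - warehouses[None, :, :], axis=2)
    return float(d.min(axis=1).sum())  # each point served by its nearest warehouse

# Example evaluation of one random particle:
print(fitness(rng.random(W * N)))
```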

  • @lancemarchetti8673
    @lancemarchetti8673 1 month ago

    Wow

  • @silpheedTandy
    @silpheedTandy 2 months ago +6

    please make the background music at least half as loud for future videos, or even quieter. i want to watch the video, but it's too draining to try to hear (and understand/process) your narration from underneath that background music, so i quit watching the video.

    • @b001
      @b001 2 months ago +2

      Noted. After all these years I’m still learning and struggling to find the right audio levels, and video ambience. Thanks for the feedback!

    • @iamtraditi4075
      @iamtraditi4075 2 months ago +8

      Fwiw, I personally didn’t mind this level of background audio

    • @user-bf3uy5ve9k
      @user-bf3uy5ve9k 2 months ago +1

      @iamtraditi4075 I did find it quite distracting, though probably not as much as OP.

    • @lukurra
      @lukurra 2 months ago +1

      @b001 A swarm of watchers nudging you towards an answer!
      I suggest looking into adding a bit of sidechain compression. It would make the music move aside in response to your voice, increasing the voice's prominence and focus, but leaving the ambiance untouched.

    • @rosettaexchangeengine141
      @rosettaexchangeengine141 1 month ago

      I agree. However, it is a difficult problem for the creator, since it is so dependent on the listener's ears. It is bizarre that after 19 years of YouTube they still don't allow posting multiple audio tracks so the listener can adjust the background music themselves.

  • @Extner4
    @Extner4 2 months ago

    first!

  • @TheMaxKids
    @TheMaxKids 2 months ago

    second!

  • @DemetriusSteans
    @DemetriusSteans 2 months ago

    Enough AIs and you can generate a realistic chunk of a 3-dimensional object in a simulation.

  • @42svb58
    @42svb58 2 months ago +1

    The logic is flawed from oversimplification! Thus, the principle does not consider the evolutionary and biological factors shaping this behavior. We still do not understand this behavior well enough to apply policy optimization towards AI/ML.