SDR Overlap Sets and Subsampling (Episode 3)

  • Published 24 Sep 2024

Comments • 60

  • @MattSiegel 8 years ago +22

    Thanks for making these! I've been reading the papers, but seeing examples and hearing the concepts from another angle really adds a lot :)

  • @dschwartz783 8 years ago +10

    Just a suggestion: using a single character for everything is a dirty habit from math that makes things very hard to read. It would be simpler for everyone to use words instead, at least for labeling sliders and whatnot. NuPIC could benefit from this too.

  • @ContainerhouseIntl 8 years ago +4

    I've watched many Jeff Hawkins lectures on these subjects, but I am enjoying this longer, systematic format. I'll be waiting eagerly for the next one!

  • @nonsense7839 8 years ago +3

    Game of Thrones and HTM School are the only two shows I eagerly wait for every week!

    • @MatthewTaylor-rhyolight 8 years ago

      +non sense That's the best compliment I've gotten in a while! ;)

    • @MrMyxomop 8 years ago +1

      +Matthew Taylor You should try to sell your series to HBO :)

  • @RonBritvich 8 years ago +11

    Excellent. I've been waiting all week for the next episode. Very clearly explained.

  • @plutophy1242 1 year ago

    It's really helpful to watch the series before reading the papers!! You get a lot of the basic concepts and accurate structures! Thx!

  • @jeffbarnett6158 6 years ago +2

    LOL, your bloopers at the end are how I normally lecture :)

  • @kostik 8 years ago +8

    Can't wait to go deeper & deeper :)

  • @steelskill 8 years ago +2

    Big fan of these videos. It's a great intro to HTM :)
    Can you please explain/write out the formula for calculating false positives? I didn't quite understand it from the video.
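    For anyone else looking for that formula: the SDR papers linked in the show notes derive the false positive chance from overlap sets. A minimal sketch in Python, assuming the paper's setup (the function name and the n=2048, w=40 parameters are illustrative choices, not the video's exact values):

    ```python
    from math import comb
    from fractions import Fraction

    def false_positive_chance(n, w, theta):
        """Chance that a random SDR (n bits, w on bits) overlaps a fixed SDR
        (also w on bits) in at least theta positions. Per the SDR papers, the
        number of SDRs sharing exactly b bits is C(w, b) * C(n - w, w - b),
        so the chance is the sum of those counts for b >= theta over C(n, w)."""
        total = comb(n, w)
        hits = sum(comb(w, b) * comb(n - w, w - b) for b in range(theta, w + 1))
        return Fraction(hits, total)  # exact rational, no float rounding

    # An exact-match threshold (theta = w) matches exactly one SDR:
    assert false_positive_chance(2048, 40, 40) == Fraction(1, comb(2048, 40))
    # Lowering the match threshold raises the false positive chance:
    assert false_positive_chance(2048, 40, 20) > false_positive_chance(2048, 40, 30)
    # theta = 0 matches everything, so the "false positive" chance is 1:
    assert false_positive_chance(2048, 40, 0) == 1
    ```

    Using `Fraction` keeps the arithmetic exact even when the binomial coefficients are astronomically large.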

  • @mdellertson 7 years ago +1

    Thanks so much for making these videos! I'm blown away by the information you're sharing. I've wanted to learn more about how the brain works ever since I read On Intelligence. I really like the way you're introducing me to these advanced concepts in a way that's easy and enjoyable to digest.

  • @bradleyfrank7933 8 years ago +6

    These videos are the best. Love the bloopers too. ;)

  • @jonclement 8 years ago +5

    Nice one!
    Before we get too deep into all the transforms, you may want to frame the discussion around the "model" that you're slowly describing. So there's an incoming queue of SDRs to be matched; SDRs are compressed, stored, matched, a source of noise, etc. Essentially mimic what you did when describing a real neuron, but draw the basic HTM flowchart / system model.

    • @NumentaTheory 8 years ago +1

      +Jon Clement I've been thinking about how to move ahead now. I could talk about encoding input data into bit arrays for spatial pooling, or I could go straight from unions into spatial pooling. Haven't decided yet.

    • @jonclement 8 years ago

      +HTM Open Source I like the concreteness of the encoding and the low-level description of the union operation. You may want to reiterate the benefit of SDRs not just as storage but as a way to roll up/union and give a solid probability of a match. So, from an HTM perspective: what (and why) are the high-level operations that SDRs support? I suppose it's a bit like explaining the benefit of XOR in cryptography.

    • @Smoly322 8 years ago +1

      +HTM Open Source I think that "encoding input data into bit arrays for spatial pooling" is the first step toward a practical realization.

  • @dmlled 5 years ago +2

    Superb explanation of the formula

  • @LeonidGaneline 8 years ago +1

    Matt, TNX! Consistent and thorough explanations.

  • @RobertAllen82 8 years ago +2

    This is excellent! I'm waiting in anticipation for more.

  • @omidnikbakht8774 2 years ago

    Sir, you look more like some aggressive Manchester United fan than a scientist lmao :D I appreciate the great content! 🙏

  • @lasredchris 4 years ago

    Fire - turns its bit on
    Hierarchy
    Chance of false positive
    Compressing SDRs - save the indices of the on bits
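    That last note, saving only the indices of the on bits, can be shown in a few lines. A sketch, with made-up function names:

    ```python
    def compress(sdr):
        """Store a mostly-zero bit array as the sorted indices of its on bits."""
        return [i for i, bit in enumerate(sdr) if bit]

    def decompress(indices, n):
        """Rebuild the full n-bit array from the stored indices."""
        sdr = [0] * n
        for i in indices:
            sdr[i] = 1
        return sdr

    dense = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
    packed = compress(dense)                      # [1, 4, 8]
    assert decompress(packed, len(dense)) == dense
    ```

    With 2% sparsity, storing 40 indices instead of 2048 bits is why SDR unions and comparisons stay cheap in practice.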

  • @miguelpaolino6508 7 years ago +2

    Ricky Gervais explaining some fine computer wizardry, I've seen it all.

  • @dr.mikeybee 7 years ago

    There's some light at the end of this lesson, but the middle was dark until then. Cheers.

  • @bzzzvzzze 4 years ago +1

    thank you!

  • @johnholt2458 8 years ago +4

    Really good job creating these videos, Matt. I read "On Intelligence" shortly after it was published and have been fascinated with it ever since. I've kept an eye on Numenta's advances all this time and tried a few times to familiarize myself with the technology. But I feel your videos really spell it out for me.
    Could you please give us an idea of the roadmap ahead? Obviously episode 4 is at the level of scratching the surface (HTM Kindergarten ;), but I'd like to know when we are going to "get there". How many grades does HTM School go to? :)
    Seriously, I would ideally like to make myself very familiar with HTM from the standpoint of a developer and be able to contribute. I need an idea of how far away I am from that.
    And a side note: how did you develop these nice interactive screens you're using?
    Take a sabbatical from Numenta and produce more of these episodes, we need them. Cheers!

    • @MatthewTaylor-rhyolight 8 years ago +1

      +john holt My roadmap is pretty fuzzy, but I think it will end up being between 20 and 30 total episodes over the next year. These things take a long time to put together, as you might imagine. You can see (and use) the visualization code I used for the SDR displays here: github.com/nupic-community/sdr-viz

  • @NgOToraxxx 8 years ago +3

    You got the best intros! lmao

  • @WilliamBarksdale 7 years ago +1

    First off, thanks for making these! I kinda feel that starting with how to compress bit arrays is strange; a broad-strokes overview might be helpful for people who are newer to HTM. I'm barely following, and only because I have some knowledge of machine learning and have read On Intelligence, so I can sort of guess where this is all leading.

    • @NumentaTheory 7 years ago +2

      I see your point. The compression thing rings a lot of bells with computer scientists.

  • @РоманСереда-с7ч 8 years ago +2

    Very interesting, thanks.

  • @mchrgr2000 7 years ago +1

    Absolutely great, thanks a lot

  • @OneOfMany07 6 years ago +1

    In the future I suggest you say what you plan to show at the beginning as well as the end (not only the end). It was very hard to follow what purpose all of these details had. Knowing where you're going lets me attach the details as you give them; otherwise I'm getting lots of puzzle pieces that don't seem to connect to anything. I don't know if you expected the viewer to have read something ahead of time or what...

    • @NumentaTheory 6 years ago

      This is the fourth episode. Hopefully you've seen those? I also mentioned a paper to read in the show notes above (and in the video) if you want all the details.

    • @OneOfMany07 6 years ago +2

      Yep, I've been going through them one by one. It was mostly a comment on how these materials were hard for me to digest on their own as presented, with a suggestion on how to improve things, and in part a question whether I'd been doing this all wrong (just watching the videos and moving forward once I think I understood the current one... meaning, was I supposed to be reading material before watching?).
      It felt like you were trying to point out details of your example, but I wanted to know "What is this supposed to show me?" first. I tend to be a top-down person, preferring to attach new examples and ideas to existing ones. I know not everyone is, but I don't think I'm alone either.
      The next video's comment about showing why SDRs are fault tolerant and noise tolerant would have been a great starting point for each of the sections here. With that as the goal I might have been able to keep track of all the details of your example as you slowly went through them. As it stood I had to rewatch the whole thing a couple of times (I didn't need to for the earlier videos).
      Anyway, thanks for doing all this and offering it to the world for free :D

  • @siarez 7 years ago +1

    To me it seems like the chance of false positives in SDRs is low because we are assuming the encoder is distributing the inputs in the SDR space evenly. In other words, if the encoder encodes semantically different inputs into similar SDRs, we will have a high chance of false positives. If true, this means a big part of the challenge when working with HTMs is developing the encoders, no?

    • @NumentaTheory 7 years ago

      It is true that a large part of the challenge of working with HTMs is developing encoders. But it is not true that the SDRs that encoders create must be distributed: the Spatial Pooler normalizes the input into a new representation that is fully distributed, with normalized sparsity. See the episodes on Spatial Pooling for a full explanation.

  • @chanokin 8 years ago +1

    Aren't the videos in the playlist in reverse order?

  • @lasredchris 4 years ago

    Neurons and SDRs
    Neuron - a complex pattern recognition system; receives SDRs via dendrites
    Sensory input
    When should it fire?

  • @thomasschijf857 6 years ago

    Great video. How did you calculate the chance of false positive? Rate = FP/(FP+TN)?

  • @vedhasp 6 years ago +1

    Thanks for the tutorials. I did not understand the discussion regarding the "chance of a false positive" at 6:48. What is the uniqueness of the SDR? Please define/clarify. Many thanks again :)

    • @NumentaTheory 6 years ago

      If you have an SDR that represents something important, you want to recognize that. A "false positive" is when you see some SDR and it randomly contains the semantics of another SDR you might have been expecting, so you classify it incorrectly. One of the points I'm trying to get at in this episode is that the chance of this "collision" of randomly matching values is extremely low, making SDRs very noise tolerant.

    • @vedhasp 6 years ago

      HTM School Thanks. I realized this later, as I revisited the chapter. My false positive calculation differs slightly. Can you direct me to the formulae (with and without noise), please?

    • @NumentaTheory 6 years ago

      I lifted all my formulae from here: arxiv.org/abs/1503.07469 & arxiv.org/abs/1601.00720

    • @vedhasp 6 years ago

      E.g. episode 4 at 11:16: at a 93% union, the probability of a false positive, by my calculation, would be (round(2048*0.93) choose 40) / (2048 choose 40) = 0.05370 != 0.040589

    • @vedhasp 6 years ago

      Oh ok thanks, let me check!!! Many thanks!! :)
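    The expression quoted above is easy to reproduce with exact integer arithmetic; a sketch, assuming the 2048-bit, 40-on-bit, 93% union-density figures as quoted in the thread:

    ```python
    from math import comb

    # If a union has turned on ~93% of 2048 bits, the chance that all 40
    # on bits of a random SDR happen to land inside the union is the ratio
    # of the ways to choose 40 bits from the union to the ways to choose
    # 40 bits from the whole space. comb() is exact on Python ints.
    union_bits = round(2048 * 0.93)                   # 1905
    p = comb(union_bits, 40) / comb(2048, 40)
    print(f"{p:.5f}")                                 # ~0.0537
    ```

    This reproduces the 0.05370 figure from the comment; whether that model matches the one used on screen is a separate question best answered by the linked papers.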

  • @justinsostre8470 1 year ago

    Just to nitpick... but I think sparsity should have its inverse taken if we are going to say "low sparsity", since in English it can be confusing. A low number should represent a high sparsity and a low density.

  • @tonni8253 4 years ago +1

    What is the name of the interface or software in which you're working and showing samples?

    • @NumentaTheory 4 years ago

      See discourse.numenta.org/t/how-to-run-htm-school-visualizations/2346

  • @lasaventurasdevitoelgatoma3405 5 years ago +1

    Matt, couldn't these SDRs be the next type of file system, due to their compression capacity and robustness? HTM and AI aside, I mean...

    • @NumentaTheory 5 years ago

      Maybe, but we need neuromorphic chips before we can really take advantage of this.

  • @AlexeySlepov 8 years ago +1

    At 3:33, where n=600, w(x)=40, b=40, w=40, the chance of a false positive is 2.3067e-63, which is greater than 0. Is that an artifact of how a binary computer calculates minuscule floating-point values, or the actual chance for those values?

    • @MatthewTaylor-rhyolight 8 years ago +3

      +Alexey Slepov We are dealing with extremely large numbers, so I had to use a math library to calculate the formulas. I've noticed that at the limits of these functions the calculations go a little weird. I believe this is because the numbers are so large. I'm not sure what to do about it, but it only seems to happen at the extreme limits of the formulas.

    • @AlexeySlepov 8 years ago +1

      +Matthew Taylor For me as a programmer it makes sense to get such numbers as a result of computer calculations. I just wanted to check there was no math behind those non-zero numbers in the SDR model. Now it's all clear. Thanks!
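  For what it's worth, the tiny value in that example is not a floating-point artifact. With an exact-match threshold (b = w = 40), exactly one of the equally likely 40-on-bit SDRs is a false positive, so the chance is 1 / (600 choose 40), and exact integer arithmetic gives a number matching the one on screen. A sketch:

  ```python
  from math import comb

  # n = 600 bits, w = 40 on bits, match threshold b = 40 (exact match):
  # only one of the C(600, 40) possible SDRs collides with the target.
  # comb() computes the binomial coefficient exactly on Python ints,
  # so the only rounding happens in the final division to a float.
  p = 1 / comb(600, 40)
  print(f"{p:.3e}")   # ~2.3e-63: minuscule, but real math, not rounding error
  ```

  So the on-screen 2.3067e-63 is the actual probability; a big-number library is only needed to compute the intermediate binomial coefficients, not to make the result nonzero.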