Convolutional Differentiable Logic Gate Networks - NeurIPS Oral - difflogic

  • Published Jan 15, 2025

Comments • 36

  • @boulabiar
    @boulabiar 2 months ago +29

    Impressive.
    Some ideas look so natural, so useful that we ask ourselves why we haven't tried this before.
    Congrats!

  • @swordofthemorning777
    @swordofthemorning777 1 month ago +6

    Came here after I saw a picture of the poster from NeurIPS. I wish more deep learning researchers made videos explaining their papers in brief. Great use of Manim.

  • @I_am_who_I_am_who_I_am
    @I_am_who_I_am_who_I_am 1 month ago +7

    We are working on the same topic, except that I'm treating them as superpositions rather than differentiable parts. Congrats on beating me to it!

  • @sam.scrolls
    @sam.scrolls 2 months ago +8

    Logic gates are an interesting idea for a model to learn. And the inference time in nanoseconds is insane!!

  • @physiologic187
    @physiologic187 2 months ago +6

    Wow! This is what real research is about. What a cool idea!! Keep up the great work.

  • @SapSapirot
    @SapSapirot 5 days ago

    Incredibly interesting and well thought out. Well done!

  • @robharwood3538
    @robharwood3538 1 month ago +1

    Just awesome! I've been wondering about how logic and neural networks could be combined for a long time, and this seems exactly the kind of thing I was hoping was possible! Amazing work, guys!

  • @potisseslikitap7605
    @potisseslikitap7605 19 days ago +4

    As long as you use nonlinear structures, you can design many different neural networks. This structure will be fast because it can be mapped directly onto current computer architecture, but in essence it is a solution to a curve-fitting problem.
    It is possible to distill existing networks into much faster versions. I think the real success will come from analog networks.

  • @andrewowens5653
    @andrewowens5653 2 days ago +1

    Yay!! I haven't read the paper yet, but if you can train 20 layers of logic and you pipeline them, that would be about another 20x improvement in speed!
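
    A hedged aside on that arithmetic, assuming a per-layer combinational delay of d and negligible register overhead (the 20x figure is the commenter's estimate, not a number from the paper):

    ```latex
    % Pipelining a 20-layer combinational circuit (assumes amsmath for \text).
    \[
      \text{throughput}_{\text{combinational}} = \frac{1}{20\,d},
      \qquad
      \text{throughput}_{\text{pipelined}} \approx \frac{1}{d}
      \qquad\Longrightarrow\qquad
      \text{speedup} \approx 20\times
    \]
    ```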

  • @shehrozeshahzad4363
    @shehrozeshahzad4363 2 months ago +1

    Mind-blowingly amazing, and it reduces computation too 👏!

  • @סרטוניםבעמ
    @סרטוניםבעמ 2 months ago +7

    Could you please share the manim code for this video? Thank you!

  • @AmanSharma-ug6sr
    @AmanSharma-ug6sr 2 months ago +1

    Amazing work!

  • @emmanuelbalogun6757
    @emmanuelbalogun6757 2 months ago

    I love this. Thought about this a while ago, great work!

  • @phaZZi6461
    @phaZZi6461 2 months ago +2

    this is extremely cool

  • @_XoR_
    @_XoR_ 13 days ago +1

    This is so cool, congrats. Two questions:
    1. Do you think it can be applied to models that mimic the attention mechanism via convolutions, so we can also apply this to transformer-like architectures?
    2. How do you think it compares to methods that convert the underlying NNs to decision trees?

  • @immi0815CoC
    @immi0815CoC 15 days ago

    wow, this was really interesting! thanks for the video, will definitely read the paper

  • @javax0
    @javax0 24 days ago +1

    I think the conv diff logic gate network is not purely combinational (because of the sliding logic gate kernels) and thus requires some registers to store intermediate results, right? If so, the merits of the logic gate network might be quite diminished by power-hungry registers running at very high clock speeds.

  • @ahmadimran1274
    @ahmadimran1274 13 days ago

    Interesting!

  • @DTUElectro
    @DTUElectro 12 days ago

    Hey Felix, very cool video and idea! Never thought of that myself!
    My question is: do you, or anyone you know, plan on implementing this as a library that can convert these neural networks into HDL?
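
    In case it helps picture the question: a purely hypothetical Python sketch of what such a conversion might emit, assuming a hardened network stored as a netlist of named two-input gates (the `netlist` format, `VERILOG_OPS` table, and `to_verilog` helper are all made up for illustration, not an existing difflogic API):

    ```python
    # Hypothetical exporter: walks a gate netlist and emits Verilog assigns.
    VERILOG_OPS = {
        "AND": "{a} & {b}", "OR": "{a} | {b}", "XOR": "{a} ^ {b}",
        "NAND": "~({a} & {b})", "NOR": "~({a} | {b})", "XNOR": "~({a} ^ {b})",
    }

    def to_verilog(netlist, inputs, outputs, name="difflogic_net"):
        """netlist: {wire_name: (op, input_a, input_b)} in topological order."""
        ports = ", ".join([f"input {i}" for i in inputs] +
                          [f"output {o}" for o in outputs])
        lines = [f"module {name}({ports});"]
        for wire, (op, a, b) in netlist.items():
            if wire not in outputs:
                lines.append(f"  wire {wire};")  # internal signals need a declaration
            lines.append(f"  assign {wire} = {VERILOG_OPS[op].format(a=a, b=b)};")
        lines.append("endmodule")
        return "\n".join(lines)

    # Example: a single XOR gate feeding the output.
    print(to_verilog({"y": ("XOR", "x0", "x1")}, ["x0", "x1"], ["y"]))
    ```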

  • @SuperJg007
    @SuperJg007 1 month ago

    this is very interesting. cheers 🎉

  • @ravipratapmishra7013
    @ravipratapmishra7013 1 month ago

    Nice! Just want to know, what pushed you guys to use gates?

  • @patrickl5290
    @patrickl5290 1 month ago

    Damn, this is impressive. Where did you get the idea?

  • @mprone
    @mprone 1 month ago +4

    Are DLGNs interpretable (or at least more interpretable than traditional neural networks)? Afaik, the field of digital design is "very well understood", and HW designers have been using synthesizers for 40+ years to map HDL code (i.e. operations) to real logic circuits. I am wondering if any of this could be reused to understand what a logic gate (or a group of them) at a given layer is doing to perform the task -- something that is arguably not really possible in a traditional DNN.

    • @I_am_who_I_am_who_I_am
      @I_am_who_I_am_who_I_am 1 month ago

      Their differentiable gates can be completely replaced by others during backpropagation, which makes this unsuitable for hardware, unless you re-route the prior layer to different routes whose behavior we know a priori. Thus, in hardware you need to create a lot of redundancy, which is not a bad thing. The whole brain is redundant, starting from the fact of having two separately functioning hemispheres. The idea is amazing. I'm working on something similar, but the gates are in probabilistic superposition. Hence, they don't have to be replaced with others, but rather turned partially on and off. Certainly, I have a lot of inherent redundancy inside my model. And since each gate is a single CPU operation, this is extremely fast.
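
      For readers trying to picture the "partially on and off" superposition: a minimal PyTorch sketch, assuming a softmax mixture over real-valued relaxations of the 16 two-input Boolean functions (the class name and exact relaxations here are illustrative, not taken verbatim from either project):

      ```python
      import torch

      def gate_outputs(a, b):
          """Real-valued relaxations of the 16 two-input Boolean functions.
          a, b are tensors in [0, 1]; each entry reduces to the exact truth
          table when a and b are exactly 0 or 1."""
          return torch.stack([
              torch.zeros_like(a),      # FALSE
              a * b,                    # AND
              a - a * b,                # A AND NOT B
              a,                        # A
              b - a * b,                # NOT A AND B
              b,                        # B
              a + b - 2 * a * b,        # XOR
              a + b - a * b,            # OR
              1 - (a + b - a * b),      # NOR
              1 - (a + b - 2 * a * b),  # XNOR
              1 - b,                    # NOT B
              1 - b + a * b,            # A OR NOT B
              1 - a,                    # NOT A
              1 - a + a * b,            # NOT A OR B
              1 - a * b,                # NAND
              torch.ones_like(a),       # TRUE
          ], dim=-1)

      class SoftLogicGate(torch.nn.Module):
          """One gate held in 'superposition': a learned distribution over
          the 16 functions; the forward pass is the expected gate output."""
          def __init__(self):
              super().__init__()
              self.logits = torch.nn.Parameter(torch.zeros(16))

          def forward(self, a, b):
              probs = torch.softmax(self.logits, dim=-1)  # all 16 gates, partially on
              return (gate_outputs(a, b) * probs).sum(dim=-1)

      # Gradients flow into the gate choice itself:
      gate = SoftLogicGate()
      out = gate(torch.tensor([0.9, 0.1]), torch.tensor([0.2, 0.8]))
      out.sum().backward()
      ```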

  • @haiyanqin5669
    @haiyanqin5669 23 days ago

    Wow, so cool. Can you explain how to run such a network on an FPGA?

  • @chassemyland
    @chassemyland 2 months ago

    This seems like it could be a genius idea.

  • @Justusv.Hodenberg
    @Justusv.Hodenberg 2 months ago +2

    Interesting!

  • @sedthh
    @sedthh 1 month ago

    Wow, the implications here for running embedded models with such a speedup are amazing!
    Can you elaborate on how you create logic gates for real-valued inputs? After passing through any of the weighted functions, won't the inputs no longer be binary?

    • @JasminUwU
      @JasminUwU 13 days ago +2

      The fully trained network settles on discrete logic gates. You only need it to be differentiable while training.
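
      A minimal sketch of that "settling", assuming the SoftLogicGate sketch above: after training, each gate's distribution is collapsed to its single most probable function, so inference is pure Boolean logic (the HARD_GATES table and harden helper are hypothetical):

      ```python
      # Discrete counterparts of the 16 relaxed gates, in the same
      # order as gate_outputs above.
      HARD_GATES = [
          lambda a, b: False,          lambda a, b: a and b,
          lambda a, b: a and not b,    lambda a, b: a,
          lambda a, b: not a and b,    lambda a, b: b,
          lambda a, b: a != b,         lambda a, b: a or b,
          lambda a, b: not (a or b),   lambda a, b: a == b,
          lambda a, b: not b,          lambda a, b: a or not b,
          lambda a, b: not a,          lambda a, b: not a or b,
          lambda a, b: not (a and b),  lambda a, b: True,
      ]

      def harden(logits):
          """Collapse a trained gate's 16 logits to its most probable function."""
          idx = max(range(16), key=lambda i: logits[i])
          return HARD_GATES[idx]

      xor = harden([0.0] * 6 + [5.0] + [0.0] * 9)  # logits peaked at index 6
      print(xor(True, False))                      # True
      ```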

  • @n45a_
    @n45a_ 14 days ago

    Interesting, I had something like this in mind for quite a while, but I'm nowhere near experienced enough.

  • @zhaoWenxuan-b9s
    @zhaoWenxuan-b9s 10 days ago

    I want to show my respect to you! And by the way, when will you share the code for the new convolutional version on GitHub? I really hope to test it! Thank you!

  • @sampadchowdhury6583
    @sampadchowdhury6583 1 month ago

    I have been very interested in this topic and have been trying to get my hands dirty with it, but I have a question. Is this scalable? Can we increase the number of gate inputs beyond 2?

  • @warpdrive9229
    @warpdrive9229 13 days ago

    That's great! But why did you have to look and sound so cute 😭 May God bless you :)

  • @thouys9069
    @thouys9069 5 days ago +1

    fucking epic!