Deep Learning (CS7015): Lec 3.5 Representation Power of Multilayer Network of Sigmoid Neurons

  • Published on Dec 31, 2024

Comments • 40

  • @anirudhnegi6724 · 4 years ago +12

    Sir, you are a maths artist!

  • @subarnasamanta4945 · 4 years ago +11

    Salute to you, sir! Please upload ML classes too.

  • @prasadhegde5246 · 4 years ago +7

    The logistic function is written wrong at 10:18; it should have been +e instead of -e in the denominator (see the note after this thread).

    • @mratanusarkar · 4 years ago

      Exactly! I was wondering the same...

    • @e555t66 · 1 year ago

      I spent 10 mins on desmos thinking the same.
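
      Note: for reference, the logistic function the commenters mean, written out (the slide's minus sign in the denominator is the typo):

        \sigma(x) = \frac{1}{1 + e^{-(wx + b)}}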

  • @AbhishekSingh-fj2so · 5 days ago

    Need to watch again

  • @saumyashah6622 · 3 years ago +1

    Last video was one hell of a ride. Phew!!

  • @meghnadesai6343 · 4 years ago +2

    Thank you, great explanation!

  • @kalyanksnlp · 6 years ago +2

    Simply superb.

  • @sarrae100 · 3 years ago

    Superbly intuitive!!

  • @copaceanubobi6101 · 4 years ago

    wow amazing teacher.

  • @keenchkaat1543 · 2 years ago

    universal approximation theorem: 3:10 (multilayer representation of sigmoid neurons)
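
    Note: for reference, the theorem named above, as usually stated for sigmoid networks (Cybenko, 1989): for any continuous function f on a compact set and any \varepsilon > 0, there exist N, v_i, w_i, b_i such that

        \left| f(x) - \sum_{i=1}^{N} v_i \, \sigma(w_i^\top x + b_i) \right| < \varepsilon \quad \text{for all admissible } x.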

  • @umang9997 · 1 year ago

    15:22
    I think sir has quoted it wrong:
    for red points we find oil, since we have a tower above the red points,
    and for blue points we don't find oil, since we don't have a tower above them.

  • @dnyanesh · 5 years ago +3

    superb Mitesh sir

  • @VishalDeswhal · 1 year ago

    amazing

  • @rohit2761 · 3 years ago +1

    Dear sir, you have uploaded a work of art. Please upload ML classes also.

  • @scifimoviesinparts3837 · 3 years ago

    Could someone please explain what happens at the 13:47 timestamp? How come we add 1 and subtract 1 to get that tower?

    • @shubhamparida2584 · 2 years ago

      Actually it is: multiply h11 by 1 and h12 by -1, then add them (h11 - h12) to get h21 (see the sketch below).
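
      Note: a minimal numpy sketch of this construction; the weights and transition points are illustrative, not from the video:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # A large w makes each sigmoid behave like a step function.
        w = 100.0
        x = np.linspace(0, 1, 11)
        h11 = sigmoid(w * (x - 0.3))    # steps up near x = 0.3
        h12 = sigmoid(w * (x - 0.6))    # steps up near x = 0.6
        h21 = 1 * h11 + (-1) * h12      # "tower": ~1 on (0.3, 0.6), ~0 elsewhere
        print(np.round(h21, 2))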

  • @draviaartistwithbat5756 · 4 years ago +1

    How do we learn the height in the diagram using sigmoid neurons?
    W: we keep it high to make the neuron work like a perceptron.
    B: it is used to shift the sigmoid.
    Then how will this model learn the height?

    • @jimmymathew8540 · 4 years ago +2

      The weights of the hidden neurons control the steepness of the sigmoid function, and the biases control its transition point (the step point). When the outputs of the hidden neurons are added, with their weights and biases set appropriately, we get a tower function. The height of the tower is controlled by the weight of the output neuron. The appropriate weights and biases of each neuron are then learned by gradient descent (see the sketch below).
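
      Note: a minimal sketch of the height point (all values illustrative): the transition of sigma(w*x + b) sits near x = -b/w, and the output neuron's weight scales the tower to the learned height.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        w = 100.0                       # large weight: steep, near-step sigmoid
        x = np.linspace(0, 1, 11)
        step_a = sigmoid(w * x - 30.0)  # bias -30 puts the transition near x = 0.3
        step_b = sigmoid(w * x - 60.0)  # bias -60 puts the transition near x = 0.6
        v = 3.5                         # output weight: the tower's height
        tower = v * (step_a - step_b)   # ~3.5 on (0.3, 0.6), ~0 elsewhere
        print(np.round(tower, 2))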

  • @rockysingla2007 · 5 years ago +2

    Does this also mean that we could just as well use a network of perceptrons in place of sigmoids to approximate any continuous function?

    • @TheDelcin · 5 years ago +4

      Theoretically it is possible, based on the explanation given, but we don't use it in practice because a perceptron outputs either 0 or 1, so from a probabilistic point of view it doesn't make sense. Rewatch the previous videos if you don't recollect this.

    • @gauthambekal235 · 3 years ago +2

      In reality you can't use a step function, because the weights are calculated by backpropagation and you cannot backpropagate through a step function.

    • @piyush-A.I.-IIT · 1 year ago

      No! Because the step function is not continuous, you cannot find the derivative of the final loss function with respect to the parameters (see the sketch below).
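
      Note: a minimal numpy sketch of that point: the sigmoid has a nonzero derivative everywhere, while a step function's derivative is zero everywhere except at the jump, so no gradient signal reaches the weights.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        z = np.array([-2.0, -0.5, 0.5, 2.0])

        # Sigmoid gradient: sigma(z) * (1 - sigma(z)), nonzero everywhere.
        grad_sigmoid = sigmoid(z) * (1.0 - sigmoid(z))

        # Step-function gradient: flat on both sides of the jump, so it is
        # zero almost everywhere; gradient descent receives no signal.
        grad_step = np.zeros_like(z)

        print(np.round(grad_sigmoid, 3))   # [0.105 0.235 0.235 0.105]
        print(grad_step)                   # [0. 0. 0. 0.]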

  • @iisc2022 · 1 year ago

    Would anyone mind providing the link the professor is referring to?

  • @Arigato009 · 2 months ago +1

    Didn't understand a single thing; the way of teaching is worse than our teacher Kanji's ❤️

    • @bagfull · 2 months ago

      True

  • @pindikrishnachandraprasad2889 · 2 years ago +3

    Do we need 2*n neurons in the case of n dimensions? Can someone confirm or correct this, please?

  • @saumyachaturvedi9065 · 11 months ago

    Sir, I have a doubt. Instead of making the tower function from two step functions, can we approximate it using other functions, such as a Gaussian?

    • @jsridhar72 · 10 months ago

      Yes, you can. In practice people use ReLU or tanh instead of the step function: the step function has zero gradient almost everywhere, whereas ReLU gives a usable gradient (see the sketch below).
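
      Note: a minimal sketch comparing the two bump shapes (widths and centres illustrative). A Gaussian bump is exactly what an RBF network uses, and RBF networks are also universal approximators:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        x = np.linspace(-3, 3, 7)

        # Bump built from two shifted sigmoids, as in the lecture:
        bump_sigmoid = sigmoid(5 * (x + 1)) - sigmoid(5 * (x - 1))

        # Gaussian bump, as an RBF unit would produce:
        bump_gauss = np.exp(-x ** 2)

        print(np.round(bump_sigmoid, 2))
        print(np.round(bump_gauss, 2))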

  • @maheshwarrankarthikeyan8630 · 6 years ago

    Shouldn't the 3D tower retain only those values greater than or equal to 2?

    • @amoghsinghal346 · 2 years ago +2

      The threshold is set at >= 1, so if you observe closely, we are counting the base of the tower as well (see the sketch below).
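
      Note: a minimal 2D sketch of the construction being discussed (steepness and cut-offs illustrative): each strip contributes ~1, so their sum is ~2 only on the intersection and ~1 on the base "arms"; a final steep sigmoid cut between 1 and 2 keeps only the intersection.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        s = 50.0                          # steepness of every step
        xs = np.linspace(0, 1, 5)
        X, Y = np.meshgrid(xs, xs)

        # Strip in x (~1 for 0.25 < x < 0.75) and the same strip in y:
        strip_x = sigmoid(s * (X - 0.25)) - sigmoid(s * (X - 0.75))
        strip_y = sigmoid(s * (Y - 0.25)) - sigmoid(s * (Y - 0.75))

        # Threshold the sum at 1.5: ~1 where both strips overlap, ~0 elsewhere.
        tower = sigmoid(s * (strip_x + strip_y - 1.5))
        print(np.round(tower, 2))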
