Deep Learning(CS7015): Lec 2.8 Representation Power of a Network of Perceptrons

  • Published 18 Sep 2024
  • lec02mod08

Comments • 36

  • @newbie8051
    @newbie8051 11 months ago +4

    2:53, 7:54 These small fun moments are what's missing in other NPTEL courses. Thank you, sir, for taking the extra effort to make us feel like we are part of a learning programme too!

  • @kewtomrao
    @kewtomrao 2 years ago +8

    This series has been exceptional!!

  • @Zero-ss6pn
    @Zero-ss6pn 10 months ago +2

    You deliver high quality content in such an elegant manner

  • @Shradha_tech
    @Shradha_tech 5 years ago +4

    You made it so easy and effortless. Thank you so much

  • @HimanshuSingh-qj5ys
    @HimanshuSingh-qj5ys 5 years ago +4

    I'm a mechanical engineering student and even I could learn this well. Just wow, teacher!

  • @desiquant
    @desiquant 3 years ago +12

    5:15 If he used the notation of -1 for False and +1 for True in the very beginning, why did he create the truth table with 0s and 1s? It only added to the confusion!

    • @katurivinay3436
      @katurivinay3436 1 year ago +1

      Yes, I was also confused, but later I realised it.

    • @vaibhavlaxman4959
      @vaibhavlaxman4959 1 year ago

      A truth table is conventionally written only with 0s and 1s.

  • @sivagamasundari8017
    @sivagamasundari8017 6 years ago +4

    Excellent explanation. Thanks, sir!

  • @desiquant
    @desiquant 3 years ago +2

    At 6:26, the first condition for XOR should be w0 + w1 < 0, i.e. w1 < -w0. Why has he written w1 < +w0? I understand that in all the previous examples we took the bias term to be negative, but here we write it as a variable, and in the summation we don't write a negative sign. So the summation is Σ w_i x_i with i from 0 to 2, where x0 = 1 and w0 has no value assigned to it. Does he mean to substitute -w0 for w0? Only then would it become -w0 + w1 < 0, i.e. w1 < w0. The sign convention here is very confusing!

  • @rishabhsingh6842
    @rishabhsingh6842 3 years ago +1

    You explain things really well, sir 🙏.

  • @bharathraman2098
    @bharathraman2098 4 years ago +5

    For scenario 1, x1 = 0, x2 = 0, w1 = -1, w2 = -1, w0 = -2. I don't understand how h1 will fire, because -2 + (-1*0) + (-1*0) = -2, which is not greater than zero, so it should not fire. In fact, none of the 4 middle-layer perceptrons should fire. Yet the outcome I see is that h1 is firing. How? What am I missing?

    • @aniketmohanty9501
      @aniketmohanty9501 4 years ago +5

      I'm assuming you are confused by the truth table shown at 5:15. The truth table only has input values of false or true, depicted as 0 and 1 respectively. But when we pass these inputs through the perceptron, we treat a false input as -1 and a true input as +1 (for which the hidden-layer neurons fire). We could also build the perceptron to take 0 and 1 as inputs just by changing the bias; it's just a matter of choice. You could even take -3 for false and -1 for true by adjusting the biases appropriately. Though I agree that the instructor should have explained his change of input values.

    • @dailyDesi_abhrant
      @dailyDesi_abhrant 4 years ago +5

      You are missing the fact that the professor mentioned earlier that in this model we consider -1 as a false input and +1 as a true input. So (-1)*(-1) + (-1)*(-1) + (-2) = 0, and hence h1 will fire.
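The convention discussed in this thread can be checked with a short sketch (my own illustration, not from the lecture): with False = -1 and True = +1, each hidden perceptron's weights equal the one input pattern it detects, and the bias is -2, so the weighted sum reaches the firing threshold of 0 only for that pattern.

```python
# Illustrative sketch of the hidden layer in the "represent any boolean
# function" construction, using the -1/+1 input encoding from the lecture.
patterns = [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # detector weights for h1..h4

def hidden_fires(x):
    """For input x, return which of h1..h4 fire (weighted sum + bias >= 0)."""
    return [w1 * x[0] + w2 * x[1] - 2 >= 0 for (w1, w2) in patterns]

for x in patterns:
    # Exactly one True per input, at x's own pattern: e.g. for x = (-1, -1),
    # h1 gets (-1)(-1) + (-1)(-1) - 2 = 0 and fires; all others stay below 0.
    print(x, hidden_fires(x))
```

This is why h1 fires for scenario 1: the 0/1 truth-table entries are re-encoded as -1/+1 before being fed to the network.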

  • @saurabhsuman2506
    @saurabhsuman2506 6 years ago +11

    Sir, I think for a 2-input perceptron network the biases would be 0, 1, 1, 2 instead of -2 for each perceptron.

    • @umairalvi7382
      @umairalvi7382 4 years ago

      No, listen carefully to what he is saying: if the inputs are -1, -1, etc., then it will make sense.

    • @pranavsawant1439
      @pranavsawant1439 4 years ago +1

      True = +1 and False = -1, as stated at 1:09. The rest should then make sense.

  • @378_anshumansharma5
    @378_anshumansharma5 3 years ago +2

    At 5:33, how is h1 = 1? Please help me out.

  • @harpreetdogra222
    @harpreetdogra222 2 years ago

    When he said why do we care about Boolean function my first response was to watch my favourite movie 😂

  • @nafeesahmad9083
    @nafeesahmad9083 3 years ago +1

    At 6:40 the condition is w2 >= w0, and we have w0 = -2, right? Then if I take w2 = -1, the condition is satisfied. But then 1 * (-1) = -1, so the neuron will not fire and the answer will be -1, whereas the output is 1 in the XOR case. I am confused here.

    • @desiquant
      @desiquant 3 years ago

      Because he has taken the wrong convention! The condition should be w0 + w2 >= 0, i.e. w2 >= -w0. Put w0 = -2 and you will get w2 >= 2.

    • @ABHAYKUMAR-rh4jc
      @ABHAYKUMAR-rh4jc 1 year ago

      @desiquant yes

  • @nafeesahmad9083
    @nafeesahmad9083 3 years ago

    Sir Salute

  • @shivkj1697
    @shivkj1697 3 years ago +1

    What's going on around 6:29?

  • @skchakrabarty2003
    @skchakrabarty2003 2 years ago

    What if the movie goer is irrational? We all are irrational to some extent. How to model that?

  • @ajayprem1703
    @ajayprem1703 3 years ago

    Now, if we use the perceptron learning algorithm on the output perceptron, the weights are first randomly initialized, and the update would be W = W + H or W = W - H, where H is the vector of hidden-layer outputs, right? But since, in this network, only one hidden-layer perceptron fires for each input, isn't H all 0s except for a single 1? Then during the update (W = W +/- H) only one weight changes, namely the one corresponding to the firing hidden perceptron. So for unseen inputs the corresponding weights just haven't been updated and remain at their initial random values?
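The commenter's observation is essentially right: because the hidden representation is one-hot, each output weight is touched only by the one input pattern whose hidden unit fires, so the output weights can be set (or learned) independently per pattern. A minimal sketch (my own illustration, assuming the lecture's -1/+1 encoding) of the output perceptron realizing XOR on top of such a hidden layer:

```python
# Illustrative sketch: one-hot hidden layer plus an output perceptron for XOR.
PATTERNS = [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # patterns h1..h4 detect

def hidden(x):
    # Each hidden unit outputs 1 only on its own pattern (weights = pattern,
    # bias = -2, fire when weighted sum + bias >= 0), otherwise 0.
    return [1 if w1 * x[0] + w2 * x[1] - 2 >= 0 else 0
            for (w1, w2) in PATTERNS]

# XOR is True exactly on (-1, +1) and (+1, -1), i.e. on h2 and h3, so the
# output perceptron gets weight 1 there and 0 elsewhere, with threshold 1.
W_OUT = [0, 1, 1, 0]

def xor(x):
    return 1 if sum(w * h for w, h in zip(W_OUT, hidden(x))) >= 1 else -1

for x in PATTERNS:
    print(x, xor(x))
```

Since exactly one hidden unit is active per input, choosing W_OUT amounts to reading the truth table off directly: weight 1 where the function is True, weight 0 elsewhere, which is what makes this construction work for any boolean function of n inputs with 2^n hidden units.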

  • @satviksrivastava6632
    @satviksrivastava6632 2 years ago

    Sir, if the model correctly classifies all data points, isn't it an overfit model? How do we deal with that? Is it explained in later videos?

    • @none-kz4qc
      @none-kz4qc 2 years ago +1

      It's not always the case. If you do very well on training data and poorly on test data, that's called overfitting. There might be cases where you do well on both training and test data. You always need validation/test data to tell whether you are overfitting.

  • @bhaskartripathi
    @bhaskartripathi 1 year ago

    While this lecture is very good, it lacks a generalized mathematical equation for an MLP; I was expecting one to be presented.

  • @Sudhirrt10
    @Sudhirrt10 4 years ago

    Can anyone please explain why h1, h2, h3 take the values (-1,-1), (-1,1), (1,-1)?

    • @jayantpriyadarshi9266
      @jayantpriyadarshi9266 4 years ago

      So that only one of the perceptrons fires for each of the 4 possible inputs.

    • @prasadhegde5246
      @prasadhegde5246 4 years ago +2

      Those are the input values for which h1, h2, h3, h4 fire (predict positive). It is slightly confusing since input 0 is treated as False, i.e. -1, and input 1 as True, i.e. +1.

    • @jimmymathew1580
      @jimmymathew1580 3 years ago +1

      @Sudhir Kumar Those are the weights on the inputs x1 and x2 going into the corresponding neuron.
