
Hebb rule with solved example

  • Published 19 Aug 2024
  • #neuralnetwork #softcomputing #machinelearning #datamining #algorithm
    Hebb algorithm | soft computing | neural networks
    1. Introduction (10 hrs):
    1.1 Biological neurons, McCulloch-Pitts model of the neuron, types of
    activation function, network architectures, knowledge representation, Hebb net
    1.2 Learning processes: supervised learning, unsupervised learning and
    reinforcement learning
    1.3 Learning rules: Hebbian learning rule, perceptron learning rule, delta
    learning rule, Widrow-Hoff learning rule, correlation learning rule,
    winner-take-all learning rule
    1.4 Applications and scope of neural networks
    2. Supervised learning networks (12 hrs):
    2.1 Perceptron networks (continuous & discrete), perceptron convergence
    theorem, Adaline, Madaline, method of steepest descent, least-mean-square
    algorithm, linearly & non-linearly separable pattern classes
    2.2 Back-propagation network
    2.3 Radial basis function network
    3. Unsupervised learning networks (6 hrs):
    3.1 Fixed-weight competitive nets
    3.2 Kohonen self-organizing feature maps, learning vector quantization
    3.3 Adaptive Resonance Theory 1
    4. Associative memory networks (8 hrs):
    4.1 Introduction, training algorithms for pattern association
    4.2 Auto-associative memory network, hetero-associative memory network,
    bidirectional associative memory
    4.3 Discrete Hopfield networks
    5. Fuzzy logic:
    5.1 Fuzzy sets, fuzzy relations, tolerance and equivalence relations
    5.2 Fuzzification and defuzzification
    5.3 Fuzzy controllers

Comments • 47

  • @nikronic
    @nikronic 4 years ago +5

    Very clear explanation. Just note that we can obtain weight values using matrix operations like Y.dot(X.transpose())
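The matrix shortcut mentioned in this comment can be sketched as follows (with hypothetical bipolar patterns, not the exact data from the video): summing the per-pattern Hebb updates y_p·x_p over all training patterns is exactly the matrix product Y·Xᵀ.

```python
import numpy as np

# Hypothetical bipolar training data: one column of X per pattern,
# with the matching target in the same column of Y.
X = np.array([[ 1,  1],
              [-1,  1],
              [ 1, -1]])   # shape (features, patterns)
Y = np.array([[ 1, -1]])   # shape (1, patterns)

# Hebb rule summed over all patterns: W = sum_p y_p * x_p^T,
# which is the matrix product Y . X^T from the comment above.
W = Y.dot(X.transpose())
print(W)  # [[ 0 -2  2]]
```

This gives in one shot the same weights the video accumulates pattern by pattern.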

  • @MarcTuinier
    @MarcTuinier 5 years ago +11

    This was a relatively simple explanation of the Hebbian rule (learning?) in a neural network.
    What I learned:
    - Used for associating, classifying and categorizing patterns
    - needs an input and an output (so not useful for unsupervised learning?)
    - a way of classifying weights
    Disclaimer: I'm new to this stuff :P

    • @tfltd.474
      @tfltd.474 4 years ago +1

      hebbian is actually unsupervised

    • @storgerbenevolent5678
      @storgerbenevolent5678 3 years ago

      @@tfltd.474 yeah, then why has he taken an output target in this video?

    • @dep2460
      @dep2460 2 years ago

      @@storgerbenevolent5678 in the question we are given the class to which "i" and "o" belong (1 for i and -1 for o) ... we have to modify the weights so that this happens. Hebb is unsupervised

  • @anuragsharma1065
    @anuragsharma1065 5 years ago +2

    Sir, you teach better than most of my teachers and friends.
    Thanks :)

  • @dikshantjain6544
    @dikshantjain6544 5 years ago +3

    Glad sir you are back again
    Upload more videos
    Without you I cannot pass soft computing

  • @aniruddhkhare2080
    @aniruddhkhare2080 5 years ago +1

    Very good, brother! That was great fun!

  • @shellnetpeak9466
    @shellnetpeak9466 5 years ago +5

    thank you so much for this wonderful class, but I have a doubt: when can we stop the problem?

  • @knowmore3536
    @knowmore3536 5 years ago +1

    nice explanation...simple explanations...easy to understand..

  • @saranc7865
    @saranc7865 5 years ago

    Nice explanation. But the entire video is just about solving problems; if the underlying concepts were also explained, it would have been spot on.

  • @HemantSingh-si3vh
    @HemantSingh-si3vh 5 years ago +1

    Thank u sir for these awesome videos.

  • @rabifazil7627
    @rabifazil7627 1 year ago

    Clear explanation

  • @geetanjaliwadhwa128
    @geetanjaliwadhwa128 5 years ago +1

    Thank god i got a better video

  • @infouniverse1728
    @infouniverse1728 2 years ago

    Best explanation

  • @amsaraza1210
    @amsaraza1210 3 years ago +1

    Thankuuu very very much

  • @gauravbhandari1184
    @gauravbhandari1184 5 years ago +1

    Great

  • @Mobidost
    @Mobidost 5 years ago +1

    Why didn't you mention the learning rate in any of the Hebb rules? How will you train any neuron if you are not using a learning-rate factor to avoid saturation?
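For context on the question above: the general Hebb update is often written Δw = η·x·y with learning rate η, and textbook Hebb-net examples (like the one worked in the video) implicitly take η = 1. A minimal sketch with a hypothetical `hebb_step` helper:

```python
# Hebb update with an explicit learning rate eta (a standard
# generalization; the video's worked example effectively uses eta = 1).
def hebb_step(w, x, y, eta=1.0):
    """Return the weight vector after one Hebb update: w + eta * x * y."""
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

w = [0.0, 0.0]
w = hebb_step(w, [1, -1], 1, eta=0.5)
print(w)  # [0.5, -0.5]
```

A smaller η scales each update down, which is one way to slow the unbounded weight growth the commenter calls saturation.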

  • @nishanisha6263
    @nishanisha6263 4 years ago +1

    Thank you..for this tutorial

    • @vikramPahaadi99
      @vikramPahaadi99 4 years ago

      NO PROBLEM MY LOVE I M ALWAYS THERE FOR U

  • @naveench4247
    @naveench4247 5 years ago +2

    Awesome explanation! Liked it! Bro, can you please tell me the book name from which you have taken this problem?

  • @dikshantjain6544
    @dikshantjain6544 5 years ago +2

    Before 1k 😊 views

  • @fit_foodie_techie
    @fit_foodie_techie 5 years ago

    Thank you please upload frequently

  • @randythamrin5976
    @randythamrin5976 3 years ago

    Indian guys can always be relied on

  • @simransingh6137
    @simransingh6137 2 years ago

    Thank you sir

  • @vaibhavsingh3378
    @vaibhavsingh3378 5 years ago +1

    Thanks

  • @Phoenix-wr6rn
    @Phoenix-wr6rn 1 year ago

    Why was the bias not drawn in the Hebb network?

  • @srinathsesaya7432
    @srinathsesaya7432 5 years ago

    Thank you

  • @storgerbenevolent5678
    @storgerbenevolent5678 3 years ago

    Hi, nice video.
    I have read that the Hebb learning rule uses unsupervised learning, so why are we using an output target here?

  • @kanangarayev6110
    @kanangarayev6110 2 years ago

    I can't understand why we need the bias?

  • @prithambaswanigiryalkar4815
    @prithambaswanigiryalkar4815 1 year ago

    Our ma'am must have watched this very tutorial too 😂

  • @muzzamilwaqas3766
    @muzzamilwaqas3766 2 years ago

    Will the weights be updated in every bipolar question? Or can we get y_in from every pattern?

  • @DeepakGupta-gu3ul
    @DeepakGupta-gu3ul 5 years ago +1

    🤗🤗

  • @eshamnarula4942
    @eshamnarula4942 5 years ago +1

    Why does the Hebb rule use bipolar data instead of binary data?

    • @nikronic
      @nikronic 4 years ago +1

      Because if you use 0 instead of -1, then every time you present an input from class 0 (originally -1), the product x·y will be zero and the weights will never update for the opposite class.
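A tiny sketch (with hypothetical single-input values) makes this reply concrete: with binary encoding the Hebb update x·y vanishes whenever either value is 0, while bipolar ±1 always yields a nonzero update.

```python
def hebb_updates(pairs):
    """Return the Hebb weight change x*y for each (input, target) pair."""
    return [x * y for x, y in pairs]

# Binary encoding: any pattern containing a 0 contributes nothing.
print(hebb_updates([(1, 1), (1, 0), (0, 1), (0, 0)]))      # [1, 0, 0, 0]

# Bipolar encoding: every pattern moves the weight.
print(hebb_updates([(1, 1), (1, -1), (-1, 1), (-1, -1)]))  # [1, -1, -1, 1]
```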

  • @auliafaza866
    @auliafaza866 5 years ago +4

    I need Indonesian subtitles haha. Anyway, I need a tutorial on the delta rule too.

    • @vikramPahaadi99
      @vikramPahaadi99 4 years ago

      NO PROBLEM AULIA TAKE TUTORIAL FROM ME

    • @randythamrin5976
      @randythamrin5976 3 years ago

      especially since he talks really fast,

  • @saosovannarith7736
    @saosovannarith7736 3 years ago

    I'm new to this stuff, how to get the target value?

  • @dewinmoonl
    @dewinmoonl 3 years ago

    watch this on 2x speed for maximum hebbian confusion :D

  • @monishap1745
    @monishap1745 4 years ago

    In the Hebb network drawn, we have to draw the bias too right?

    • @nisarggogate8952
      @nisarggogate8952 3 years ago

      the bias is zero, I guess... so it doesn't matter
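For reference on the bias question above: in the standard textbook Hebb-net formulation the bias is treated as a weight on a constant input of 1, so it updates as b_new = b_old + y and does belong in the network diagram. A minimal sketch with a hypothetical helper:

```python
# Hebb step where the bias acts as a weight on a constant input of 1:
# w_new = w_old + x * y,  b_new = b_old + y.
def hebb_step_with_bias(w, b, x, y):
    w = [wi + xi * y for wi, xi in zip(w, x)]
    return w, b + y

w, b = hebb_step_with_bias([0, 0], 0, [1, -1], 1)
print(w, b)  # [1, -1] 1
```

Starting the bias at 0 only means its first update is y itself; after training it is generally nonzero.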

  • @lovely_kratos7134
    @lovely_kratos7134 1 year ago

    You explained it too fast...!! but thanks anyway

  • @kamlesh6290
    @kamlesh6290 3 years ago

    I have mailed you. Can you please reply to it? Need your help.

  • @manavpoddar2262
    @manavpoddar2262 5 years ago

    Brother, teach a little more slowly, what's the rush?

  • @crater7531
    @crater7531 4 years ago

    please... slow... down...gawd