RNN #4: Backpropagation through time | BPTT | RNN

  • Published on Sep 11, 2024
  • Backpropagation through time
    Watch the Reinforcement Learning course on Skillshare: skl.sh/2WHyoVG
    Join Skillshare using this link to get 2 months free Premium Membership: skl.sh/2JQqXpi
    Best Books on Machine Learning:
    1. Introduction to Machine Learning with Python: A Guide for Data Scientists: amzn.to/2TLlhAR
    2. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems: amzn.to/2wKtPij
    3. Pattern Recognition and Machine Learning (Information Science and Statistics): amzn.to/33aNXpL
    4. Deep Learning with Python - François Chollet: amzn.to/39ISNgv
    5. Deep Learning (Adaptive Computation and Machine Learning series) - Ian Goodfellow: amzn.to/2vMPVR7
    6. Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning series) - Kevin P. Murphy: amzn.to/33aNrYN

Comments • 4

  • @namansharma9490 2 years ago +1

    Hi, I have a small doubt: how do you change/tune the values of Wa and the weights between the hidden-layer nodes?

  • @sktt1faker516 3 years ago +2

    Can you tell me what the difference is between backpropagation through time and backpropagation?

    • @willboler830 2 years ago +1

      Backpropagation through time is simply a method of applying backpropagation to an unrolled recurrent neural network. Given an RNN, it can be unrolled into a standard feedforward network with shared weights, where each layer in the new network corresponds to one time step forward. You backpropagate through this unrolled network in time, updating the weights as you go. The only major difference is the shared weights. You can imagine that the weights stay fixed in the unrolled network for each step forward through the sequence, but as you backpropagate the loss, the weights are momentarily decoupled and updated iteratively. The final set of weights at the termination of the backprop (the first layer) is your new set of weights for the RNN. This is how an RNN can handle variable-length input sequences.
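
      A minimal NumPy sketch of the unrolling idea described above, assuming a plain tanh RNN with a squared-error loss at the final step; the names Wx, Wh, Wy and the helper bptt are illustrative, not taken from the video. Note that this sketch uses the standard formulation, where the per-step gradients for the shared weights are accumulated across time and applied once at the end:

      import numpy as np

      def bptt(xs, target, Wx, Wh, Wy):
          """Unroll the RNN over the inputs xs, then backpropagate through time."""
          T = len(xs)
          h = [np.zeros(Wh.shape[0])]            # h[0] is the initial hidden state
          # Forward pass: every unrolled "layer" reuses the same shared weights.
          for t in range(T):
              h.append(np.tanh(Wx @ xs[t] + Wh @ h[t]))
          y = Wy @ h[-1]
          loss = 0.5 * np.sum((y - target) ** 2)

          # Backward pass through time: gradients for the shared weights are
          # accumulated over all time steps because the same Wx and Wh appear
          # in every unrolled layer.
          dWx, dWh = np.zeros_like(Wx), np.zeros_like(Wh)
          dWy = np.outer(y - target, h[-1])
          dh = Wy.T @ (y - target)               # gradient flowing into the last hidden state
          for t in reversed(range(T)):
              dz = dh * (1 - h[t + 1] ** 2)      # backprop through tanh
              dWx += np.outer(dz, xs[t])         # same Wx at every step, so gradients sum
              dWh += np.outer(dz, h[t])
              dh = Wh.T @ dz                     # pass the gradient one step back in time
          return loss, dWx, dWh, dWy

      # One gradient-descent step on toy data; any sequence length works,
      # since the same shared weights are unrolled to match it.
      rng = np.random.default_rng(0)
      Wx, Wh, Wy = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), rng.normal(size=(2, 4))
      xs = [rng.normal(size=3) for _ in range(5)]
      loss, dWx, dWh, dWy = bptt(xs, rng.normal(size=2), Wx, Wh, Wy)
      Wx -= 0.01 * dWx
      Wh -= 0.01 * dWh
      Wy -= 0.01 * dWy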

  • @ruchiryadav8819 2 years ago

    Wow, what an explanation... wow