Why Initialize a Neural Network with Random Weights || Quick Explained

  • Published Dec 21, 2024

Comments • 21

  • @DevelopersHutt
    @DevelopersHutt  3 years ago

    Please leave feedback if you can. It means a lot to me

  • @ESKALATE-esir
    @ESKALATE-esir 9 months ago +1

    Your ppt is amazing

  • @MsPOOJA0123
    @MsPOOJA0123 1 year ago +1

    This channel is clearly underrated.. Kudos bro..

  • @karangoyal9602
    @karangoyal9602 3 years ago

    You are doing an amazing job. This way of first explaining the concept and then showing its implementation via proper code is superb!! Keep growing.

  • @vipingautam9501
    @vipingautam9501 1 year ago

    Great! Your single video was enough to break the symmetry.

  • @eleanortay7351
    @eleanortay7351 1 year ago

    Simple and wonderful explanation. Many thanks!

  • @neerajverma9226
    @neerajverma9226 3 years ago +3

    Make a playlist on building a neural network from scratch, without using any library.

    • @DevelopersHutt
      @DevelopersHutt  3 years ago

      😳 without even numpy?? Nice idea
      I will try to do it as soon as possible (a rough sketch of the idea follows this thread)

    • @neerajverma9226
      @neerajverma9226 3 years ago

      @@DevelopersHutt whenever you do, inform me
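
A minimal pure-Python sketch of what such a from-scratch network could look like (no NumPy): one hidden layer, sigmoid activations, and plain gradient descent on the OR truth table. The layer sizes, learning rate, and training data are illustrative assumptions, not details from the video.

    import math, random

    random.seed(0)

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Weights drawn from a normal distribution -- the "random initialization"
    # the video talks about -- so the two hidden neurons start out different.
    w_hidden = [[random.gauss(0.0, 0.5) for _ in range(2)] for _ in range(2)]
    b_hidden = [0.0, 0.0]
    w_out = [random.gauss(0.0, 0.5) for _ in range(2)]
    b_out = 0.0

    def forward(x):
        h = [sigmoid(sum(w * xi for w, xi in zip(w_hidden[j], x)) + b_hidden[j])
             for j in range(2)]
        y = sigmoid(sum(w * hj for w, hj in zip(w_out, h)) + b_out)
        return h, y

    def train_step(x, target, lr=0.5):
        # One step of gradient descent on squared error for a single example.
        global b_out
        h, y = forward(x)
        delta_out = (y - target) * y * (1 - y)          # sigmoid' = y * (1 - y)
        for j in range(2):
            delta_h = delta_out * w_out[j] * h[j] * (1 - h[j])
            w_out[j] -= lr * delta_out * h[j]
            for i in range(2):
                w_hidden[j][i] -= lr * delta_h * x[i]
            b_hidden[j] -= lr * delta_h
        b_out -= lr * delta_out

    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]   # OR truth table
    for _ in range(2000):
        for x, t in data:
            train_step(x, t)

    print([round(forward(x)[1], 2) for x, _ in data])   # outputs approach 0, 1, 1, 1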

  • @mandarchincholkar5955
    @mandarchincholkar5955 3 years ago +1

    Thanks a lot, brother.. love you lots.. keep going.. ❤️❤️❤️

  • @AmirhosseinKhademi-in6gs
    @AmirhosseinKhademi-in6gs 2 years ago

    Thanks for your clear explanation.
    Here I have a question about activation functions:
    If we are doing non-linear regression and dealing with a PDE involving 2nd- or 3rd-order derivatives, can we use a non-differentiable activation function such as ReLU, or should we necessarily use an infinitely differentiable one like tanh? (see the sketch after this comment)
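
A small sketch (my own illustration, not from the video) of the smoothness difference that matters when the loss itself contains derivatives of the network, as in physics-informed training: tanh has well-defined derivatives of every order, while ReLU's first derivative is a step function and its second derivative is zero away from the origin and undefined at it, so a 2nd- or 3rd-order PDE residual gets no useful curvature from a ReLU network. A smooth, at-least-C² activation such as tanh is the usual choice there.

    import math

    def tanh_derivs(x):
        # tanh and its first two derivatives -- smooth everywhere
        t = math.tanh(x)
        return t, 1.0 - t**2, -2.0 * t * (1.0 - t**2)

    def relu_derivs(x):
        # ReLU: the first derivative is a step function; the second derivative
        # is 0 for x != 0 and undefined exactly at x = 0
        return max(0.0, x), (1.0 if x > 0 else 0.0), 0.0

    for x in (-1.0, 0.0, 1.0):
        print(x, tanh_derivs(x), relu_derivs(x))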

  • @hoaxuan7074
    @hoaxuan7074 3 years ago +1

    Low-curvature initialization is a good idea. Random initialization gives very high curvature, and it is very difficult to smooth that out.

    • @DevelopersHutt
      @DevelopersHutt  3 years ago

      Well, "random" here refers to the normal distribution, which is what most neural networks currently use to initialize weights (a small example follows this thread).

    • @hoaxuan7074
      @hoaxuan7074 3 years ago

      @@DevelopersHutt The problem is that a lot of noise will remain in the net after training. Anyway, people are free to try different things.
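
For context on the reply above, a sketch of what drawing initial weights from a normal distribution usually looks like in practice; the fan-in scaling shown is Xavier/Glorot-style and is an assumption on my part, not something stated in this thread.

    import random

    def init_layer(fan_in, fan_out, seed=None):
        # Each weight comes from a zero-mean normal distribution whose standard
        # deviation shrinks with fan_in (Xavier/Glorot-style), keeping the
        # variance of the pre-activations roughly constant from layer to layer.
        rng = random.Random(seed)
        std = (1.0 / fan_in) ** 0.5
        weights = [[rng.gauss(0.0, std) for _ in range(fan_in)] for _ in range(fan_out)]
        biases = [0.0] * fan_out        # biases are commonly started at zero
        return weights, biases

    w, b = init_layer(fan_in=784, fan_out=128, seed=42)
    print(len(w), len(w[0]), round(w[0][0], 4))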

  • @kvetcka
    @kvetcka 3 years ago

    Why does the first layer also equal 0.5?? (see the sketch below)
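
Presumably this refers to the zero-initialization example in the video: with all weights (and biases) at zero, every neuron's pre-activation is 0 and sigmoid(0) = 0.5, so the first hidden layer, like every other layer, outputs 0.5 no matter what the input is. A small sketch of that symmetry (the input values and layer size are made up for illustration):

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    x = [0.7, -1.3, 2.0]                                # an arbitrary input
    zero_weights = [[0.0] * len(x) for _ in range(4)]   # 4 neurons, all weights zero

    # Every pre-activation is 0, so every neuron outputs sigmoid(0) = 0.5 --
    # including the neurons in the first layer.
    layer_out = [sigmoid(sum(w * xi for w, xi in zip(neuron, x)))
                 for neuron in zero_weights]
    print(layer_out)    # [0.5, 0.5, 0.5, 0.5]

    # Because all neurons compute the same value, they also receive the same
    # gradient and stay identical after every update -- that is the symmetry
    # that random initialization breaks.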

  • @zatoitche
    @zatoitche 3 years ago

    Very well done. Many thanks

  • @ahmedsameh4191
    @ahmedsameh4191 1 year ago

    Thank you, it's useful.

  • @jasonokoro8400
    @jasonokoro8400 2 years ago

    hope ur gradients are ok :)

  • @hoaxuan7074
    @hoaxuan7074 3 years ago

    Off topic: There is a thing called Fast Transform fixed filter bank neural nets. I can't go into details because that leads to comment removal.

  • @demetriusdemarcusbartholom8063
    @demetriusdemarcusbartholom8063 2 years ago

    ECE 449 UofA