Thanks for your clear explanation. I have a question about activation functions: if we are doing non-linear regression and dealing with a PDE involving 2nd- or 3rd-order derivatives, can we use a non-differentiable activation function such as ReLU, or must we necessarily use an infinitely differentiable one like tanh?
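For anyone else wondering: here's a minimal sketch (assuming PyTorch and a PINN-style setup, which the video itself doesn't cover) of why tanh is usually preferred when the loss contains 2nd- or 3rd-order derivatives. A ReLU network is piecewise linear in its inputs, so its second derivative is zero almost everywhere.

```python
# Minimal sketch (assuming PyTorch; not from the video) comparing d2u/dx2
# of a tiny ReLU network vs a tiny tanh network.
import torch

torch.manual_seed(0)
x = torch.linspace(-1.0, 1.0, 5).unsqueeze(1).requires_grad_(True)

for name, act in [("relu", torch.nn.ReLU()), ("tanh", torch.nn.Tanh())]:
    net = torch.nn.Sequential(torch.nn.Linear(1, 16), act, torch.nn.Linear(16, 1))
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]    # du/dx
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]  # d2u/dx2
    print(name, d2u.flatten().tolist())
# The ReLU row prints zeros, so a PDE residual containing u_xx gets no
# useful gradient signal; the tanh row stays smooth and non-zero.
```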
Please leave feedback if you can. It means a lot to me
Your ppt is amazing
This channel is clearly underrated.. Kudos bro..
You are doing an amazing job. This way of first explaining the concept and then showing its implementation via proper code is superb!! Keep growing.
Great! Your single video was enough to break the symmetry
Simple and Wonderful Explanation. Great Thanks!
Glad it was helpful!
Make a playlist on building a neural network from scratch, without using any library
😳 without even numpy?? Nice idea
I will try to do it as soon as possible
@@DevelopersHutt whenever you do, let me know
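In the meantime, here's a tiny sketch of what "from scratch, no libraries" could look like for a single dense layer (the layer size, weights, and sigmoid choice below are purely illustrative):

```python
# A plain-Python forward pass for one dense layer -- no numpy, no frameworks.
# The layer size, weights, and sigmoid activation are purely illustrative.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense_forward(inputs, weights, biases):
    # weights: one list of input-weights per output neuron
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

random.seed(0)
inputs = [0.5, -1.2, 3.0]
weights = [[random.gauss(0.0, 1.0) for _ in inputs] for _ in range(2)]  # 2 neurons
biases = [0.0, 0.0]
print(dense_forward(inputs, weights, biases))
```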
Thanks a lot brother.. love you a lot.. keep going.. ❤️❤️❤️
Low curvature initialization is a good idea. Random initialization is very high curvature and it is very difficult to smooth that out.
Well, "random" here refers to the normal distribution, which is what most neural networks currently use to initialize weights.
@@DevelopersHutt The problem is that a lot of noise will remain in the net after training. Anyway, people are free to try different things.
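For reference, a minimal sketch of the normal-distribution initialization mentioned above, in plain Python (the Glorot-style scale is one common choice, not something stated in this thread):

```python
# Sketch: drawing weights from a normal distribution, as mentioned above.
# The Glorot/Xavier-style standard deviation is an assumed, common choice.
import math
import random

random.seed(0)

def init_weights(n_in, n_out):
    scale = math.sqrt(2.0 / (n_in + n_out))  # Glorot-style std dev
    return [[random.gauss(0.0, scale) for _ in range(n_in)] for _ in range(n_out)]

print(init_weights(4, 3))  # 3 output neurons, each with 4 input weights
```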
why does the first layer also equal 0.5??
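If this refers to the zero-initialization example with a sigmoid activation (an assumption on my part), the reason is that zero weights and a zero bias give a pre-activation of 0 for every neuron, and sigmoid(0) = 0.5 no matter which layer the neuron sits in:

```python
# Why every layer, including the first, outputs 0.5 when weights start at zero
# (assuming a sigmoid activation, as in a zero-initialization example).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x = [0.7, -1.3, 2.4]          # any input values
w = [0.0, 0.0, 0.0]           # all weights zero
b = 0.0                       # bias zero
z = sum(wi * xi for wi, xi in zip(w, x)) + b   # = 0 regardless of the input
print(sigmoid(z))             # 0.5 -- the same for every neuron in every layer
```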
Very well done. Many thanks
thank you, it's useful
hope ur gradients are ok :)
Off topic: There is a thing called Fast Transform fixed filter bank neural nets. I can't go into details because that leads to comment removal.
ECE 449 UofA