From scratch? IN SCRATCH!
Not that impressive tbh, just 3 layers
I got recommended this a day after I made a neural network in Scratch
me too
same
this is a good demo of how neural networks work, but actually training on this would be really bad since it's quite slow
why is The Joker making a neural network tutorial in scratch?
wtf. why did i get this recommended 4 hours after upload lmao this is epic
is this a neural network? at 0:54 there doesn't seem to be any mention of a bias or activation functions, and I think the equation is supposed to be a weighted sum, so i1 * w1 + i2 * w2 + i3 * w3 + ... + i_n * w_n + b, and then an activation function (if there is one).
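The weighted sum the comment describes is the standard single-neuron computation. A minimal sketch (the input, weight, and bias values here are made up for illustration, and sigmoid is just one common choice of activation):

```python
import math

def neuron_output(inputs, weights, bias):
    """Standard neuron: weighted sum of inputs plus bias, then activation."""
    # i1*w1 + i2*w2 + ... + i_n*w_n + b
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    # sigmoid activation squashes the result into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

print(neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, -0.2], 0.1))
```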
Edit: So, after watching the whole video, I'm a bit confused, but I think there are some things that are incorrect. NOTE: I might be wrong as well; I'm not an ML expert, just a hobbyist.
1) It seems like they're trying to do RL with neural networks and a genetic-algorithm(ish) approach.
At 1:28, I'm fairly certain this is a mistake, but I'm not entirely sure. It seems like they're going for a living penalty (a small negative reward given at each timestep to incentivize the agent to finish the task as quickly as possible). The reward should be *set* to -0.1 at the start of the reward code, and the "change reward" blocks should be "set reward" blocks. The issue right now is that it's "change" instead of "set", so any existing reward is slightly penalized by -0.1 every step. This means that if the agent's distance is
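The set-vs-change distinction the comment describes can be sketched outside Scratch like this (a hypothetical reward loop based on the comment's reading of the video; the -0.1 living penalty and the per-step bonuses are placeholders):

```python
def buggy_total(bonuses):
    """'change reward' semantics: the reward variable carries over
    between timesteps, so penalties and bonuses keep stacking."""
    reward = 0.0
    for b in bonuses:
        reward += -0.1   # "change reward by -0.1" on top of old reward
        reward += b      # per-step bonus also accumulates
    return reward

def intended_reward(bonuses):
    """'set reward' semantics: the reward is rebuilt from a clean
    baseline every timestep instead of accumulating."""
    reward = 0.0
    for b in bonuses:
        reward = -0.1    # "set reward to -0.1": fresh living penalty
        reward += b
    return reward        # reward for the last timestep only
```

With three identical 0.5 bonuses, the "change" version keeps growing across steps while the "set" version reports a fresh per-step value.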
I didn't watch the whole video tho cuz I was too lazy lol
@@kevishere_21 it probably took you more time to write two comments than to watch until the end...
@@goid314 lol that's weirdly true I should prob watch the whole video but I'm too lazy I'll do it later
@@goid314 OK, just watched the whole video, and I see some problems. I'll update the original comment rq
yup, his approach isn't quite the "normal" calculation of a neural network's output.
Don't take every video on the internet as the holy grail lol.
thanks, this is an easy explanation of how neural networks work.
he talks about the formula, but I have never seen a formula that just multiplies EVERYTHING TIMES EVERYTHING. Seems odd.
Normally you apply the activation function to the sum of the individual weighted inputs of a neuron.
couldn't find a good tutorial and I've got no clue how it works, I'm just gluing together code
good
Love the video but the microphone...
even better
O~O
this is really poorly optimized man, use some loops and index variables to do the repeated calculations. is this the NEAT algorithm?
idk the algorithm's name, it just randomly mutates, the best-performing ones get saved, and then the next gen uses the best ones and mutates them
@@yyhhttcccyyhhttccc6694 does it only mutate the weights and biases, or can it create new neurons and connections?