This video is insanely good. You uploaded it 7 years ago; I hope you're doing well today!
By far the only well explained tutorial on RBF , thank you!!!👌
Agreed, 100%! :-)
Year 2021 - I agree with you
Notes for my future revision.
An RBF network has only three layers: input, hidden, and output.
Number of input nodes = number of variables (features). Example: if each input digit is a grid of 300 pixels, then the number of input nodes is 300.
The number of hidden nodes depends on model optimisation. Each hidden node is an RBF function, e.g. a Gaussian, with a beta parameter. The number of variables (dimensions) of the function is the same as the number (dimension) of the input variables.
Number of nodes in the output layer:
a) for classification = number of possible results/classes
b) for value estimation = one node?
Unlike a multi-layer perceptron, an RBF network does not have any weights associated with the connections between the input layer and the hidden layer. Instead of weights, this layer has each RBF node's position, width, and height.
Weights are associated with the connections between the hidden layer and the output layer. As with an MLP, these weights are optimized when the RBF network is trained.
The output layer nodes of an RBF network are the same as those in an MLP. They contain a combination function - usually a weighted sum of the inputs - and a transfer function that is often linear or logistic.
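The notes above can be sketched as a minimal forward pass. This is an illustrative sketch only, assuming Gaussian hidden units with a single shared beta; the function names and shapes are my own, not from the video.

```python
import numpy as np

def rbf_forward(x, centers, beta, weights):
    """Forward pass of a minimal RBF network.

    x:       (d,) input vector (d = number of input nodes/features)
    centers: (h, d) array, one center per hidden RBF node
    beta:    width parameter of the Gaussian kernels
    weights: (h, k) hidden-to-output weights (k = number of output nodes)
    """
    # Hidden layer: Gaussian RBF activation, one value per hidden node.
    dists_sq = np.sum((centers - x) ** 2, axis=1)
    hidden = np.exp(-beta * dists_sq)
    # Output layer: weighted sum (the "combination function" above),
    # here with a linear transfer function.
    return hidden @ weights

# Tiny example: 2 features, 3 hidden nodes, 2 output classes.
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
weights = np.random.randn(3, 2)
out = rbf_forward(np.array([1.0, 0.5]), centers, beta=1.0, weights=weights)
print(out.shape)  # (2,)
```

Only the hidden-to-output weights would be trained; the centers and beta are the hidden layer's "position and width" parameters.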
Thank uuu!
As many people mentioned, you did a great job explaining this with a minimal amount of complexity to grasp the concept and investigate further!
You just out-taught the hell out of all my graduation teachers, in English (which is not even my first language). Thanks man!
Dear brother, I swear your videos are the best on the topic. Precise visuals, examples, crisp explanation! Keep your channel active, wanna see it grow
Extremely clear explanation. You are a very smart teacher. Thank you.
No one, and I mean no one, explains this like you do. Thanks, thanks a lot, thank you very much
Just such a good explanation, by far the simplest and most complete I've seen out of several videos - thank you!
The best video that clearly and simply explains RBF. I have seen many videos, but this one is by far the best. I learned a lot; unfortunately, the channel is no longer active.
Very clear explanation. So far the best video on RBF networks on the internet. Thank you!
One of the best explanations for RBF. I tried understanding them from several texts but this one is crystal Clear.
I am blessed to be able to hear from you
I bet you are a master at MATH in general, because someone who understands it can simplify it... that's why this video is so easy to understand
I wish we continued his videos. Great job!
What a nice explanation, of a complex theme. Thanks for sharing your knowledge.
What a great video! Thank you for the easy and visualized explanations!
ARMYyyyyyyy....💜😅
Great Explanation....Far better than many other explanations...Thank U....It helped
Thank you,
I am learning about this, and it's good to learn in a short time.
Have a good day man
Great video, I was just struggling with RBFs, but your video just made them much more understandable, thanks
Glad I could help!
Great video, it's much easier than what I got in uni. Keep going!
Great explanation! You did a great job breaking down these complicated ideas.
macheads101 , Thank you bro. YOU made Everything simple & demystified.
excellent explanation. The complexity of the algorithm is simplified. Thank you.
So many kudos to you. It goes to show that it doesn't matter how fancy your video is if your explanation is trash. Well done!
Thank you very much for this video! I learnt a lot and am thankful for the good use of slides as well! Great work!
Hah, I was very impressed by how you broke the concept down, so I went link surfing and realized you now work at OpenAI. Can't say I'm surprised :)
Excellent tutorial! Finally found a video that's easy to understand. Thanks a lot!
Your explanations are very clear. Thanks.
Very well explained with great intuitive motivations.
Great explanation, that's real didactics. Thank you very much!
It's basically a Gaussian mixture model on the hidden layer with a Gaussian activation function (like a kernel machine). The question is: how do I backpropagate the mean and the variance of these Gaussians? Another question on RBF networks: the sum of the hidden-to-output layer weights is 1, and you can estimate a PDF with it. What makes this sum-of-weights-equals-one constraint happen? You can't have it on a normal MLP.
Greatt
You have done it, brother. I hope you are or will become successful.
you gonna fire this youtube platform...wow!!!! blast man!
You are better than my teacher
Such a clear and in-depth explanation, thank you.
Thankyouuu so muchhhhh this video is a gem for a beginner, basically cleared all my doubts! ❤️
Super helpful! Please continue
Great step by step explanation, thank you!
Very well explained and the diagrams are helpful
In the initial seconds of the video... you said RNN is helpful in pattern recognition.. So are CNNs... Are CNNs and RNNs somewhat similar in a way?
Do the centers of the circles need to be the same as the inputs (data points)? Is it possible to have centers other than the data points?
This video saved my day! I am learning RBF for forecasting and didn't know where to start! I want to use RBF to correct the forecasting errors. Do you have any advice on materials like books, videos, or papers that can help me? Thank you so much!
Really helpful video. Thanks Macheads101
Cool and great tutorial. Thanks for your efforts, which helped a Chinese student understand the contents and your standard English. Please keep on introducing more neural networks to us. BTW, could you illustrate the differences between some typical locally connected neural networks, such as RBF, B-spline basis, and the CMAC? THANK YOU IN ADVANCE.
Thanks for your useful video. I wanna know about the method by which we can correct the weights. Do we derive the RBF to correct the weights? How?
I thought that β is the variance of the kernel. So, if you have lower variance - a "thinner" kernel (e.g. Gaussian) - then you can have smaller circles (though with less smoothness).
I assume you preprocessed the image to center and scale them before feeding the pixels into the input layer. IMO rotation correction would be too complicated.
If I use k-means to find the centers, do I then just need to train the output neurons?
Thanks for the video! Say I use two output neurons per class, so I have a different set of weights for each neuron. When I am training the weights between the hidden and each output neuron (with the least-squares method), should I use only the observations that belong to the associated output neuron? I will be glad if you can help.
Thanks for this very beautiful explanation. Can you please make a video on use of RBFN for solutions to partial differential equations
Thank you Sir for this wonderful video, I have a question. How are the basis functions determined in practice? Why did you choose the Gaussian function as the basis function?
I may have missed it, but do you talk about how to determine beta? Nice explanation btw, best video I've seen on RBF.
So, if I use RBF, I can do clustering and classification?
You are awesome, you explain really well and smoothly. tnx
Hi, I really enjoyed your video; you explained it very well. I have a question at 8:00: when we are increasing beta, shouldn't the slope drop quickly as the size of the circle increases?
Can you make a video on Neuroevolution and explain how genetic algorithms work? And nice video.
Haha, it's funny that you asked this question. Recently, I have been developing a neuroevolution algorithm for training "large" neural networks--something which hasn't been done yet (at least not fully). I definitely plan to make a video on neuroevolution soon in which I will hopefully (knock on wood) show off some of my new results.
macheads101 I would be interested to see how that turns out. I experimented with a genetic algorithm neural network that handles only the weights, which worked in parallel with gradient descent. I hoped the network would have an edge at breaking out of local minima and eventually reach better peak performance. I found that gradient descent did most of the work, and even the improvements made by crossing elites in the population weren't superior to continuous gradient descent with any one member. I didn't run any very long-term tests, but in the tests I did run I didn't find any final error rates getting much lower than networks trained without the genetic method.
I eventually want to tinker with genetically evolving topologies (NEAT, HyperNEAT), but I haven't got around to it yet. Again I would be very interested in watching that video if you do make it!
As an update, I don't think I want to make an evolution video anymore. While I was able to train some networks with evolution, the training was *way* slower than with gradient descent. I just see no practical motivation for it.
Hi Macheads101, I wanted to know if we may look at the source code for the handwriting demo, to modify it for use in more general purposes?
Thank you so much for the explanation.
Hi, machead, excellent work! Just one question: how do you interpret the output of an RBFNN as a probability, the way an MLPNN with sigmoid activation does? I mean, when I train an RBFNN with multiple 0-1 target outputs, the predictions are usually real numbers varying in the interval [-0.sth, +1.sth]; it does not seem like a probability to me.... Can you comment on this?
Thank you very much for your explanation.. Very good video.. Congratulations
Would rotation and scaling of the original figure improve the accuracy of prediction?
Are RBF networks typically a single layer as shown in the video? how would multiple layers or the concept of a hidden layer work for an RBF network?
Typically, RBF networks are "shallow", consisting of one RBF layer and then a layer of output neurons. The layer of radial basis functions is essentially the hidden layer.
While I have never seen this in practice, it is theoretically possible to create "deep" RBF networks. Just imagine treating the output of one RBF network as a point (each output neuron gives one coordinate) and then feeding this point into a second RBF network. Whether or not this would be useful or easy to train is a different question.
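The stacking idea in this reply can be sketched roughly as follows. This is a toy illustration under my own assumptions (Gaussian hidden units, random parameters), not a tested architecture.

```python
import numpy as np

def rbf_layer(x, centers, beta, weights):
    """One shallow RBF network: Gaussian hidden units, linear output."""
    hidden = np.exp(-beta * np.sum((centers - x) ** 2, axis=1))
    return hidden @ weights

# "Deep" RBF: treat the first network's output vector as a point
# (each output neuron gives one coordinate) and feed that point
# into a second RBF network, as described above.
rng = np.random.default_rng(0)
x = rng.normal(size=4)                          # original input point
c1, w1 = rng.normal(size=(5, 4)), rng.normal(size=(5, 3))
c2, w2 = rng.normal(size=(6, 3)), rng.normal(size=(6, 2))

point = rbf_layer(x, c1, beta=0.5, weights=w1)    # 3-coordinate "point"
out = rbf_layer(point, c2, beta=0.5, weights=w2)  # second network's output
print(out.shape)  # (2,)
```

Whether such a stack is useful or trainable is, as the reply says, a separate question.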
Interesting. Often my first thought with 'neural' algorithms is how they can be made similar to a ConvNet, where you have successive layers computing increasingly abstract features. So I was wondering what properties successive radial functions would have, but perhaps that's a topic I currently cannot address.
I have no one else to ask, I tried to implement it, but it is not learning. Not sure why.
Any tips about possible pitfalls?
Great video as always long time fan
Amazing explanation. Thanks!
With which software/tool did you capture your handwritten digits?
Really great explanatory video!!
Hey, can you explain how you learned to make use of the MNIST dataset in Go? I'm looking at github.com/unixpickle/mnist/blob/master/dataset.go but I really don't understand how you're decompressing it and turning it into a Go file to be used. Could you point me in the right direction of how you learned to do that?
I used a package called go-bindata to embed the MNIST data in a Go source file. This data is compressed with gzip, so my code does the following: decompress embedded MNIST files (line 159-169) -> decode files into "images" (171-207). I suspect it is 171-207 that you are confused about. The MNIST files come in a binary format, so the code there is just dealing with the specific bytes in the MNIST file.
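For readers without the Go source at hand, the decoding step described here can be sketched in Python. This is a hypothetical re-implementation of the idea, not the repo's actual code; it relies on the MNIST IDX image format, whose header is four big-endian uint32 fields (magic number 2051 for images, image count, rows, cols), followed by raw pixel bytes.

```python
import gzip
import struct

def read_mnist_images(path):
    """Decode a gzipped MNIST image file (IDX format).

    The header is four big-endian uint32 values: magic (2051 for
    images), image count, rows, cols. Each image then follows as
    rows*cols unsigned bytes.
    """
    with gzip.open(path, "rb") as f:
        magic, count, rows, cols = struct.unpack(">IIII", f.read(16))
        assert magic == 2051, "not an IDX image file"
        size = rows * cols
        # One bytes object of length rows*cols per image.
        return [f.read(size) for _ in range(count)]

# Usage (filename is illustrative):
# images = read_mnist_images("train-images-idx3-ubyte.gz")
# print(len(images), len(images[0]))
```

The Go code discussed above does the same two steps: gunzip the embedded bytes, then walk the binary header and pixel data.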
Thank you so much, you're honestly the most helpful youtuber and active in replies. I hope your channel does well
thanks a lot! very nice and clear explanation
Illusion in this video at 4:30...... if you stare continuously at the person, then the red circles on the green plane (upper left image) vanish.... and you see a full green sheet.... amazing how our brain just removes the red circles.... just a fun comment !!!
Hi Alex! I know it's a weird idea and it's totally irrelevant to your videos, but if you are going to take the GRE test, would you consider to do a video talking about how you plan to tackle the test? ... cheers
Great explanation! Thank you
Nicely explained, thanks.
FOR ITALIANS: I made a playlist on RBF networks! :) here -> th-cam.com/video/fcBz-3NchCI/w-d-xo.html
you made it so simple thanks
Hi! I enjoyed the video; it really helped me get a better understanding of RBF networks.
Can you explain how to code an RBF in MATLAB?
lol
Great video. Don't delete.
Great explanation!
Thanks, great explanation :)
thanks for your explanation
do you mean the RBF like an activation function or what?
Hey! I need your help regarding one of the RBF programs. Could you please help me?
Nice explanation
Thanks a lot. BTW, have they ever told you you resemble John Lennon?
Hi
I need some suggestion
Minor project
-----------------------
We have taken liver patient disease as a data set, balanced it using SMOTE, and applied an RBF classifier and a Naive Bayes one.
I don't know what to do for the major project. Any suggestions, including an extension of that project or a new one, based on previous projects that would at least be available on the internet,
as there is no one to guide us
Submission of topic is tomorrow
Thank you
Excellent, thank you
Just so good!
Hi, could you make a video about how to see or control someones mac? from a mac.. plzz
Thanks this was helpful
excellent video
legend. Thank You.
thank you great stuff
Thank you!
Amazing!!
Thank you
damn. you're good
Higher dimensions
Amazing explanation! Thanks a loads!