You explained in 10 minutes what my professor couldn't in two weeks.
Thank you for your detailed explanation of this method. After learning this method, I tried to write a code to do this example that you introduced here. As you rightly mentioned, initial guess is very important, particularly for c2. By looking at the data, we can have an idea of c2, that is related to the period of the cosine curve. If c2 is guessed too low or too high, the algorithm won't converge, it must be close to the true value...Thank you again for your great video. Educational, detailed, and very helpful.
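In case it helps anyone else trying to reproduce this, here is a minimal sketch in Python with SciPy of the kind of experiment described above. The model form y = c1*cos(c2*t) + c3 is an assumption for illustration (the exact model in the video may differ), and it shows how a c2 guess far from the true frequency can derail the fit:

```python
# Minimal sketch, assuming a model of the form y = c1*cos(c2*t) + c3.
import numpy as np
from scipy.optimize import curve_fit

def model(t, c1, c2, c3):
    return c1 * np.cos(c2 * t) + c3

# Synthetic data with known parameters (c1=2, c2=1.5, c3=0.5) plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 100)
y = model(t, 2.0, 1.5, 0.5) + 0.1 * rng.standard_normal(t.size)

# A guess for c2 near the true angular frequency converges...
good, _ = curve_fit(model, t, y, p0=[1.0, 1.2, 0.0])
print("good guess ->", good)

# ...while a guess far from it can land in a poor local minimum
# or fail to converge at all.
try:
    bad, _ = curve_fit(model, t, y, p0=[1.0, 10.0, 0.0])
    print("bad guess  ->", bad)
except RuntimeError as err:
    print("bad guess  -> did not converge:", err)
```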
What a good explanation! I really got the point, Prof.
Sir, you are amazing! I wonder how you would explain the Gauss-Newton and Levenberg-Marquardt algorithms as well. Thank you!
Thank you, man, that was an excellent explanation.
I'm not sure if this is a typo, but you write "t" instead of "x" consistently throughout the video.
You have y(x), yet x never appears in the equation on the right side?
I think he called it "t" later on; he got confused.
Well explained! Thanks
Thank you so much! Can you please explain more about Gauss-Newton and Levenberg-Marquardt algorithms?
That was very helpful; thank you very much!
How does iteration work for this?
Love the videos!
This was really helpful sir. Thank you :-)
1:16 x not t.
Really cool!
Hey there, great video. Just wondering, how many parameters should we use? Does it really matter how many parameters we put in?
The more parameters, the more computationally expensive it is, but the method will work with any number of parameters. Ideally, you use as few as possible to get a good fit.
How do you make the initial guess for the Ci prime vector (3:40)? Do these values affect the solution's convergence? Thank you.
If you have a poorly behaved system, it's definitely possible for the initial guess to affect convergence. The simplest method of guessing good initial values is to do a brute force grid search over possible values, then choose a vector that results in a low residual as your starting guess. This method may prove intractable for systems with large numbers of parameters.
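For anyone who wants to try that, here is a minimal sketch of such a brute-force grid search in Python, again assuming the hypothetical model y = c1*cos(c2*t) + c3; the grid ranges are placeholders you would adapt to your own data:

```python
# Minimal sketch of a grid search for a starting guess, assuming the
# hypothetical model y = c1*cos(c2*t) + c3.
import itertools
import numpy as np

def model(t, c1, c2, c3):
    return c1 * np.cos(c2 * t) + c3

def grid_search_initial_guess(t, y):
    """Scan a coarse grid of parameter values and return the combination
    with the smallest sum of squared residuals."""
    c1_grid = np.linspace(0.5, 5.0, 10)
    c2_grid = np.linspace(0.1, 5.0, 25)   # the frequency is the sensitive one
    c3_grid = np.linspace(-2.0, 2.0, 9)
    best, best_rss = None, np.inf
    for c1, c2, c3 in itertools.product(c1_grid, c2_grid, c3_grid):
        rss = np.sum((y - model(t, c1, c2, c3)) ** 2)
        if rss < best_rss:
            best, best_rss = (c1, c2, c3), rss
    return best  # use this as the starting vector for the iterative solver
```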
You are amazing!
ZERO-X Please, Prof: how do I get the real value of c from the calculated changes in c? Do I add or subtract?
Thank you!!
Can you do this for a function with two variables, e.g. f(x,y;a,b) and g(x,y;c,d)?
Are c1, c2, and c3 known variables?
c1, c2, c3 are unknown; we start with an initial guess for them and then iteratively improve c1, c2, c3 by adding dc1, dc2, dc3.
The notation could be more elaborate by giving them an iteration index in the superscript.
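For concreteness, here is a minimal sketch in Python of one such update. It uses a Gauss-Newton step, i.e. it solves the linearized least-squares problem for dc1, dc2, dc3 at the current guess (which may or may not be exactly what the video does), again assuming the hypothetical model y = c1*cos(c2*t) + c3:

```python
# Minimal sketch of one iterative update, assuming y = c1*cos(c2*t) + c3.
import numpy as np

def gauss_newton_step(t, y, c):
    c1, c2, c3 = c
    # Residuals at the current guess.
    r = y - (c1 * np.cos(c2 * t) + c3)
    # Jacobian of the model with respect to (c1, c2, c3).
    J = np.column_stack([
        np.cos(c2 * t),            # d(model)/d(c1)
        -c1 * t * np.sin(c2 * t),  # d(model)/d(c2)
        np.ones_like(t),           # d(model)/d(c3)
    ])
    # Solve the linearized least-squares problem J dc ~ r for the update dc.
    dc, *_ = np.linalg.lstsq(J, r, rcond=None)
    return c + dc  # improved guess: c_new = c_old + dc

# Usage: iterate until the change in c is small, e.g.
# c = np.array([1.0, 1.2, 0.0])
# for _ in range(20):
#     c = gauss_newton_step(t, y, c)
```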
@@rvoros What are dc1, dc2, etc.? I mean, how much change should be added to the previous parameter?
If you know these variables, you cannot fit a curve, because then you would only have one curve.
Nice Aggie Ring! Thanks & Gig 'em, AERO '23
Isn't this just gradient descent?