nonlinear regression parameter confidence intervals

  • Published on 25 Aug 2024

Comments • 45

  • @n-context
    @n-context 3 years ago +6

    Great! The first understandable explanation of the confidence intervals for the nonlinear fit parameters I’ve heard/seen. Much better than my math classes! Thanks a lot for the video

  • @iii9981
    @iii9981 2 years ago +1

    Your explanation really made me cry; I have been searching for this for a very long time.

  • @jazbelmejor
    @jazbelmejor 10 months ago

    you are the goat

  • @sjwang3892
    @sjwang3892 3 years ago +2

    How come the video only has so few views?

    • @spurvevej42
      @spurvevej42 3 years ago

      Thanks SJ Wang - spread the word :-)

  • @rajschneider9857
    @rajschneider9857 1 year ago

    Thanks so much. Great explanation and demonstration. I have searched for a long time but never found any tutorial that good!

    • @spurvevej42
      @spurvevej42 1 year ago

      You're welcome Raj! Remember to subscribe 🙂

  • @TheGibberingGoblin
    @TheGibberingGoblin 1 year ago

    Thankyouuuu

  • @mohammadkhezri6877
    @mohammadkhezri6877 4 months ago

    It was so useful, thanks🎉

    • @spurvevej42
      @spurvevej42 3 months ago

      Thanks, Muhammad, much appreciated. Do subscribe :-)

  • @leveneursebastien
    @leveneursebastien 1 year ago

    I've spent hours on the internet trying to find a good explanation... Thanks

    • @spurvevej42
      @spurvevej42 1 year ago +1

      You're welcome, do subscribe 👈🙂

  • @dasithasela3052
    @dasithasela3052 2 years ago

    This was a really informative and easy-to-understand explanation! Thank you very much!

    • @spurvevej42
      @spurvevej42 2 years ago

      You're welcome, please subscribe 🙂

  • @none1979none
    @none1979none 3 years ago

    Thank you, Christian. It helps me a lot. It's much better than reading the textbook :D

  • @TheDrpizzaman
    @TheDrpizzaman 1 month ago

    What a great video - thank you so much for being such a good teacher.
    I have read that in non-linear regression, SS_regression + SS_error does not equal SS_total (so very different compared to linear regression). I wonder if you could explain why, and point me to some good reading for this?

    • @spurvevej42
      @spurvevej42 1 month ago

      Thanks for your kind feedback. You are right that using R^2 for non-linear models is error-prone; nevertheless, most statistical software packages report R^2 even for nonlinear regression. Check statisticsbyjim.com/regression/r-squared-invalid-nonlinear-regression/ . In any case, you should never rely solely on R^2 as a measure of the goodness of fit of your model, although there is a tendency to do so 'out there'. Measures like adjusted R^2, the standard error of the regression, and the statistical significance of your regression parameters are equally important in this respect (a short sketch of these quantities follows below). Remember to subscribe 😊
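
A minimal Python sketch of the goodness-of-fit quantities mentioned in the reply above (R^2, adjusted R^2, standard error of the regression, and parameter significance). The exponential model and the data are invented for illustration, and scipy.optimize.curve_fit is used rather than the spreadsheet approach shown in the video:

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

# Invented example data and model (exponential decay), for illustration only
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([10.2, 7.6, 5.9, 4.4, 3.5, 2.7, 2.1, 1.7])

def model(x, a, b):
    return a * np.exp(-b * x)

popt, pcov = curve_fit(model, x, y, p0=[10.0, 0.3])
resid = y - model(x, *popt)

n, p = len(y), len(popt)
dof = n - p

ss_res = np.sum(resid**2)
ss_tot = np.sum((y - y.mean())**2)

s = np.sqrt(ss_res / dof)                      # standard error of the regression
r2 = 1 - ss_res / ss_tot                       # R^2 (reported even for nonlinear fits)
r2_adj = 1 - (1 - r2) * (n - 1) / dof          # one common adjusted-R^2 convention

se = np.sqrt(np.diag(pcov))                    # parameter standard errors
t_vals = popt / se                             # t = estimate / standard error
p_vals = 2 * stats.t.sf(np.abs(t_vals), dof)   # two-sided p-values

print(f"s = {s:.3f}, R^2 = {r2:.4f}, adj. R^2 = {r2_adj:.4f}")
for name, est, std, pv in zip(["a", "b"], popt, se, p_vals):
    print(f"{name} = {est:.3f} +/- {std:.3f} (p = {pv:.3g})")
```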

  • @mightbin
    @mightbin 11 months ago

    Great tutorial, but I saw somewhere that "modern software estimates parameter confidence intervals based on asymptotic standard errors calculated by nonlinear optimizers such as Marquardt-Levenberg, used by SAS, R, GraphPad, Mathematica, and more". So what is the difference between the method you taught here and the asymptotic standard error of, e.g., a 4PL curve? Thank you very much.

  • @AhmedThahir2002
    @AhmedThahir2002 8 months ago

    Hi there! This was an amazing video!
    How would I estimate the confidence interval for the prediction in this case?

  • @spurvevej42
    @spurvevej42 3 years ago

    Glad you found it useful

    • @Theeloberto
      @Theeloberto 3 years ago

      Hello there. Can I ask about some sources or literature where this process is described? I am using it in my bachelor thesis for the calculation of 95% CIs of regression coefficients (found by optimisation methods). It is working fine, but it would be nice if I could reference something more than "some smart guy on YT". Thank you and have a nice day!

  • @acidithiobacillusferrooxid3687
    @acidithiobacillusferrooxid3687 2 years ago

    Thank you for this vid, this helped me a lot during my paper writing!

    • @spurvevej42
      @spurvevej42 2 years ago

      You're welcome, please subscribe 👈🙂

    • @acidithiobacillusferrooxid3687
      @acidithiobacillusferrooxid3687 2 years ago

      @@spurvevej42 I came back with a question: in my calculation I got a couple of negative values on the diagonal of the final, inverted matrix, and this leads to imaginary values when taking square roots. Could you maybe elaborate on what this means statistically? Thanks!

  • @hangyuyu
    @hangyuyu 1 year ago

    Thanks very much for the clear explanation. Is it possible to tell me which book I should read to learn more about parameter estimation?

    • @spurvevej42
      @spurvevej42 1 year ago

      You are welcome, Hangyu. Remember to subscribe 🙂 I believe you can find good resources on the web for starters. E.g. www.google.com/url?sa=t&source=web&rct=j&url=www.coursehero.com/file/24707059/261ALecture03pdf/&ved=2ahUKEwi3qf2v4Yr9AhVriP0HHYsXBioQFnoECBIQAQ&usg=AOvVaw1le6ujYElPqfckBDQZO172

  • @davidtrindade7662
    @davidtrindade7662 3 years ago

    Thanks for the clear explanation. Do you have a similar video that explains the calculation for nonlinear regression individual prediction intervals?

    • @spurvevej42
      @spurvevej42 3 years ago

      I am sorry, I don't, but it is pretty much the same approach. The primary difference is how to calculate the standard error of the predicted value (i.e. the half-width of the PI). It goes like this: PI = [y - t*se(y); y + t*se(y)], where y is the estimated response, t is the t-quantile (same as for the CI), and se(y) = sqrt( sigma_hat^2 + se(mu)^2 ), where se(mu) is the standard error of the mean response (a small sketch follows below). I'll probably put up a video some time in April covering this topic.
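
A small sketch of that prediction-interval recipe, assuming an illustrative exponential model fitted with scipy.optimize.curve_fit; the gradient of the model with respect to the parameters is taken numerically, and the data are invented for the example:

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

# Invented example data and model, for illustration only
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([10.2, 7.6, 5.9, 4.4, 3.5, 2.7, 2.1, 1.7])

def model(x, a, b):
    return a * np.exp(-b * x)

popt, pcov = curve_fit(model, x, y, p0=[10.0, 0.3])
dof = len(y) - len(popt)
sigma2_hat = np.sum((y - model(x, *popt))**2) / dof    # residual variance

def prediction_interval(x0, level=0.95, eps=1e-6):
    """PI at x0: y_hat +/- t * sqrt(sigma_hat^2 + se(mu)^2)."""
    y_hat = model(x0, *popt)
    # numerical gradient of the fitted curve w.r.t. the parameters at x0
    grad = np.array([(model(x0, *(popt + eps * np.eye(len(popt))[i])) - y_hat) / eps
                     for i in range(len(popt))])
    se_mu2 = grad @ pcov @ grad           # variance of the mean response, se(mu)^2
    se_y = np.sqrt(sigma2_hat + se_mu2)   # add residual variance for a new observation
    t = stats.t.ppf(1 - (1 - level) / 2, dof)
    return y_hat - t * se_y, y_hat + t * se_y

print(prediction_interval(4.5))
```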

  • @andikabayuaji9890
    @andikabayuaji9890 3 years ago

    Thank you, Christian. Can I have your opinion on the bootstrap method compared to using the Jacobian matrix to derive parameter confidence intervals?

  • @koenvandemoortel2206
    @koenvandemoortel2206 3 years ago

    Thanks for this explanation. Now, this way you can find out nicely how sensitive/robust your parameters are, under the assumption that (1) the data are 'precise' and (2) the model is 'precise'.
    To take into account that the data are never 100% precise, I propose the following: add some noise to the data, with the same amplitude as the precision of the measurements, re-fit the parameters, and do this, say, 100 times. Calculate the standard deviation for each of the obtained parameters, and this will give you a realistic estimate of their confidence intervals that takes the real data into account (see the sketch after this thread).
    Could you agree with this method, or am I missing something?

    • @spurvevej42
      @spurvevej42 3 years ago

      Sampling is certainly an interesting alternative to my approach - thanks for contributing

    • @koenvandemoortel2206
      @koenvandemoortel2206 3 years ago

      @@spurvevej42 Thanks. Are you interested in testing the beta-version of my iterative regression software?
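
A rough sketch of the noise-and-refit idea proposed in this thread (essentially a parametric Monte Carlo), assuming a known measurement standard deviation and an illustrative exponential model; the data and the noise level are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Invented example data and model, for illustration only
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([10.2, 7.6, 5.9, 4.4, 3.5, 2.7, 2.1, 1.7])
meas_sd = 0.15                      # assumed precision of the measurements

def model(x, a, b):
    return a * np.exp(-b * x)

popt, _ = curve_fit(model, x, y, p0=[10.0, 0.3])

# Re-fit many perturbed data sets and collect the parameter estimates
samples = []
for _ in range(100):
    y_noisy = y + rng.normal(0.0, meas_sd, size=y.shape)
    p_fit, _ = curve_fit(model, x, y_noisy, p0=popt)
    samples.append(p_fit)
samples = np.array(samples)

print("parameter means:", samples.mean(axis=0))
print("parameter stds: ", samples.std(axis=0, ddof=1))   # spread of each parameter
```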

  • @MauroTambasco
    @MauroTambasco 1 year ago

    Thank you for this explanation! Can you tell me what this method is called?

    • @spurvevej42
      @spurvevej42 1 year ago

      Hi Mauro, glad you found it useful. It is simply the method of least squares, invented by Gauss hundreds of years ago. Remember to subscribe 🙂

    • @MauroTambasco
      @MauroTambasco 1 year ago

      @@spurvevej42 Thanks for your quick reply. Sorry, I don't think my question was specific enough. What I was trying to ask is whether the method you demonstrate to compute the confidence intervals of the parameters of a non-linear regression has a name.

    • @spurvevej42
      @spurvevej42 1 year ago

      I am not familiar with this calculation having a name per se. It is based on a linearization of the non-linear regression at the point at which one wishes to compute the CI. As such, one could refer to it as a sort of "error propagation".

    • @dingodingding
      @dingodingding 1 year ago

      @@MauroTambasco This is known as the Wald CI or the asymptotic approximate symmetrical CI. It is only valid if the parameter estimates behave similarly to those for a linear model: normally distributed, minimum variance, and unbiased. This is not always the case with nonlinear regression, but it approximately holds when the sample size is very large. The preferred method for nonlinear regression is to calculate the profile likelihood CI (a small numerical sketch of the Wald approach follows this thread).

    • @iii9981
      @iii9981 9 months ago

      @@dingodingding may I know in which book or reference this method is introduced?
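
For reference, a minimal sketch of the Wald (asymptotic) confidence intervals discussed in this thread, building the covariance matrix as sigma_hat^2 * (J^T J)^(-1) from the numerical Jacobian returned by scipy.optimize.least_squares; the model and data are invented for illustration, and this is not the video's exact spreadsheet procedure:

```python
import numpy as np
from scipy import stats
from scipy.optimize import least_squares

# Invented example data and model, for illustration only
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([10.2, 7.6, 5.9, 4.4, 3.5, 2.7, 2.1, 1.7])

def residuals(theta):
    a, b = theta
    return a * np.exp(-b * x) - y

res = least_squares(residuals, x0=[10.0, 0.3])
theta_hat, J = res.x, res.jac                  # estimates and Jacobian at the optimum

n, p = len(y), len(theta_hat)
dof = n - p
sigma2_hat = np.sum(res.fun**2) / dof          # residual variance estimate

cov = sigma2_hat * np.linalg.inv(J.T @ J)      # asymptotic (Wald) covariance matrix
se = np.sqrt(np.diag(cov))
t = stats.t.ppf(0.975, dof)                    # two-sided 95% quantile

for name, est, s in zip(["a", "b"], theta_hat, se):
    print(f"{name}: {est:.4f}  95% Wald CI [{est - t*s:.4f}, {est + t*s:.4f}]")
```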

  • @yohanesnuwara1959
    @yohanesnuwara1959 3 years ago

    Hi Christian, is the t-statistic also valid for a large number of observations, e.g. 100 (98 degrees of freedom)? Or should we use the z-statistic?

    • @spurvevej42
      @spurvevej42 3 years ago

      Hi Yohanes, yes, the t-statistic is valid for as many observations as you like. However, as that number increases, our variance estimate becomes better and better (it approaches the true variance) and we may therefore use the Z-statistic instead, pretending that we know the true variance. A rule of thumb is that when n > 30, you may replace t by Z (see the short comparison below).
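
A quick numerical illustration of that rule of thumb: the two-sided 97.5% t-quantile approaches the normal quantile 1.96 as the degrees of freedom grow (the chosen degrees of freedom are arbitrary examples):

```python
from scipy import stats

# 97.5% quantiles: t approaches the normal value 1.96 as dof grows
for dof in (5, 10, 30, 98, 1000):
    print(dof, round(stats.t.ppf(0.975, dof), 3))
print("normal:", round(stats.norm.ppf(0.975), 3))
```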

  • @Theeloberto
    @Theeloberto 3 years ago

    Good morning sir, I use this method to do some calculations in my bachelor thesis. It's working fine and giving reasonable results. Can I ask you about a source/literature where this is described? In some statistics books I found the linear regression part with the design matrix etc., but I struggle to find any mention of the nonlinear case, i.e. substituting the design matrix X by the Jacobian J, etc. Could you please provide me with a literature source that supports the method in your video? These calculations are pretty crucial for my thesis, and it would be nice if I could reference something other than "a smart guy on YT". Thank you very much, good work on the video.

    • @spurvevej42
      @spurvevej42 3 years ago

      Good afternoon Mr. Oberta, I think this paper stat.ethz.ch/~stahel/courses/cheming/nlreg10E.pdf gives a fine introduction to the subject. Enjoy!

  • @emirhadzic4996
    @emirhadzic4996 3 years ago

    Could I ask why t_0.975 and not t_0.95? I implemented your idea, but it's been a while since I actually took a statistics course.

    • @yohanesnuwara1959
      @yohanesnuwara1959 3 years ago +3

      It's because the confidence interval is two-tailed. The remaining probability (1 - 0.95) = 0.05 is split between the two tails, 0.025 in each. So the quantile used for the t-statistic is 1 - 0.025 = 0.975 (see the snippet below).
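
In code form, the two-tailed split for a 95% confidence interval looks like this (the degrees of freedom are an arbitrary example):

```python
from scipy import stats

level = 0.95
alpha = 1 - level                 # 0.05 split across both tails
q = 1 - alpha / 2                 # 0.975 for the upper tail
dof = 10                          # arbitrary example degrees of freedom
print(q, stats.t.ppf(q, dof))     # 0.975 and the corresponding t-quantile (~2.228)
```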