Thank you so much for your explanation. I wonder about the residual maximum likelihood (REML) estimate of sigma(theta), which wasn't shown.
When are maximum likelihood and REML each used, and how should we choose between these two methods? Can you explain the derivation of the formula?
Thanks so much for the video! I have a question: does the distribution have to be normal to derive the generalized least squares estimate?
The generalized least squares estimate is a particular function of the data, so it can always be used when it can be computed. When the distribution is normal, it happens to be the maximum likelihood estimate, which gives it nice properties. See the Gauss-Markov theorem for additional properties
en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem
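To see concretely what "a particular function of the data" means, here is a minimal sketch in Python. The design matrix X, the error covariance Sigma, and the coefficients are made up for illustration; the only substantive line is the one computing beta_hat = (X' Sigma^-1 X)^-1 X' Sigma^-1 y, which is the generalized least squares estimate and coincides with the maximum likelihood estimate when the errors are normal.

import numpy as np

# Hypothetical example data: n observations, p predictors.
rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))                       # design matrix
Sigma = np.diag(rng.uniform(0.5, 2.0, size=n))    # error covariance, assumed known here
y = X @ np.array([1.0, -2.0, 0.5]) + rng.multivariate_normal(np.zeros(n), Sigma)

# GLS estimate: beta_hat = (X' Sigma^-1 X)^-1 X' Sigma^-1 y
Sigma_inv = np.linalg.inv(Sigma)
beta_hat = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ y)
print(beta_hat)

Note that nothing in this computation requires normality; the formula can be evaluated whenever Sigma is known (or estimated) and the relevant matrices are invertible.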
Nice