When making predictions and drawing samples for phi_1, phi_2, and sigma, one should sample a single index and take all parameter values at that index. This is because the parameter posteriors may be correlated, and sampling each parameter separately can produce parameter combinations that are actually very unlikely.
In other words, our chain of posterior samples is a set of 3D points. If each dimension is drawn separately we get the right mean and variance for each dimension (the marginal distributions), but the 3D structure discovered by the NUTS sampler is lost.
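A minimal sketch of what I mean, assuming an MCMCChains.Chains object called `chain` with parameters named :phi_1, :phi_2, and :sigma (those names and the variable `chain` are my assumptions, not necessarily what the video uses):

```julia
# Joint vs. independent draws from a posterior chain (sketch).
# Assumes `chain` is an MCMCChains.Chains with parameters :phi_1, :phi_2, :sigma.
using MCMCChains

post = Array(chain[[:phi_1, :phi_2, :sigma]])   # (n_draws × 3) matrix of posterior samples

# Correct: draw ONE row index and keep the whole parameter vector together,
# preserving the correlation structure NUTS found.
idx = rand(1:size(post, 1))
phi_1, phi_2, sigma = post[idx, 1], post[idx, 2], post[idx, 3]

# Not recommended: drawing each column independently keeps the marginals but
# can combine values that rarely (or never) occur together in the joint posterior.
# phi_1 = rand(post[:, 1]); phi_2 = rand(post[:, 2]); sigma = rand(post[:, 3])
```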
I wonder whether the noise term should be added when forecasting, as it was here. Doing so introduces noise into the forecast, which may be desirable for some purposes but not for others.
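To make the two options concrete, here is a sketch of a single AR(2) forecast step, assuming a joint posterior draw (phi_1, phi_2, sigma) and an observed series `y` (my variable names, not the video's):

```julia
# One AR(2) forecast step, with and without the innovation noise.
using Distributions

point_forecast = phi_1 * y[end] + phi_2 * y[end-1]        # conditional mean only
noisy_forecast = point_forecast + rand(Normal(0, sigma))  # posterior predictive draw (adds noise)
```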
For ongoing support, please consider joining and becoming a Channel Member:
th-cam.com/channels/QwQVlIkbalDzmMnr-0tRhw.htmljoin
Thank you!
Great video, very insightful and very practically taught. I'd be keen to see this extended to ARMA/ARIMA models.
This is a really well structured video.
Thanks!
Thanks for posting this valuable insight.
I did not know until I saw this that Turing has a predict function.
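In case it helps anyone, a rough sketch of how I understand predict is used: you pass a model whose observations are `missing` together with the posterior chain, and it returns posterior predictive draws. The model name `ar2_model` and its argument are hypothetical here:

```julia
# Posterior predictive forecasting with Turing's predict (sketch).
using Turing

n_ahead  = 10
y_future = Vector{Union{Missing, Float64}}(missing, n_ahead)  # unobserved future values
forecast_chain = predict(ar2_model(y_future), chain)          # Chains of predictive draws
```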
Fractional PDEs?