I'm kind of late to the party, but I've been trying to get my mind around pymc3 for a couple of days and am finding it hard going.
I think there are a few reasons for this:
1) There's a lot of assumed knowledge about properties of statistical distributions that engineers like me may not have.
2) Unlike PyTorch, for example, I can't just drop a pdb.set_trace into the forward method and inspect what's going on. The optimization is a lot less transparent than with deep learning.
3) I've been using Python for a long time, and I've never seen a class instantiated in a context manager before.
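Regarding point 3: the pattern can be sketched in plain Python. This is a hypothetical toy, not PyMC3's actual implementation — the class names `Model` and `Normal` here just mimic the idea that entering the `with` block makes the model the "current" context, so distributions created inside it can register themselves automatically:

```python
import threading

class Model:
    """Toy sketch of a class that doubles as a context manager and
    collects objects declared inside its `with` block (hypothetical,
    loosely mimicking how pm.Model() collects random variables)."""
    _context = threading.local()

    def __init__(self):
        self.variables = []

    def __enter__(self):
        # Entering the `with` block makes this instance the active model.
        Model._context.current = self
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        Model._context.current = None

class Normal:
    """Toy stand-in for a distribution: on creation it registers itself
    with whichever Model is currently active."""
    def __init__(self, name):
        self.name = name
        model = getattr(Model._context, "current", None)
        if model is not None:
            model.variables.append(self)

with Model() as model:
    mu = Normal("mu")        # registered via the active context
    sigma = Normal("sigma")

print([v.name for v in model.variables])  # → ['mu', 'sigma']
```

So `with pm.Model() as model:` is ordinary Python: the class instance's `__enter__` runs at the top of the block, and everything declared inside can look up the active model.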
Google needs to work on their recommendations engine, because this should have been recommended a long time ago.
Great introductory talk!
Thanks! Cool talk. I think the prior for beta has a standard deviation of 1.0 (see above in the notebook), rather than 0. That is why the beta output here (th-cam.com/video/RrND0VQhDXY/w-d-xo.html) is slightly greater, rather than lower, than 0.25.
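A minimal conjugate normal-normal sketch (with made-up numbers, not the notebook's actual values) of why this matters: a prior with standard deviation 1.0 barely pulls the posterior away from the data estimate, whereas a near-zero prior SD would pin beta at the prior mean:

```python
def normal_posterior_mean(prior_mean, prior_sd, data_mean, data_se):
    # Conjugate normal-normal update: precision-weighted average of the
    # prior mean and the data estimate.
    prior_prec = 1.0 / prior_sd**2
    data_prec = 1.0 / data_se**2
    return (prior_prec * prior_mean + data_prec * data_mean) / (prior_prec + data_prec)

# Hypothetical numbers: the data alone suggests beta ≈ 0.25 with SE 0.05.
wide  = normal_posterior_mean(0.0, 1.0, 0.25, 0.05)  # prior sd = 1.0: weak pull
tight = normal_posterior_mean(0.0, 0.1, 0.25, 0.05)  # prior sd = 0.1: strong pull
print(round(wide, 4), round(tight, 4))  # → 0.2494 0.2
```

With sd = 1.0 the posterior mean stays essentially at the data's estimate, so the sampled beta can sit near (or above) 0.25 depending on the data, rather than being forced toward the prior mean.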
Great intro, yet I still don't know how PP would augment the performance of, say, a machine learning model applied to data that changes very frequently over time. I'm aware that most ML models tend not to perform well on non-stationary time series data. I still can't see how PP-based algorithms would help detect changes in distributions, given of course that they exist in the first place. Is there any hope using techniques from DSP? Your input is highly appreciated!
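For what it's worth, a classical (non-probabilistic) baseline for the change-detection question is CUSUM; the sketch below uses made-up data and a hypothetical helper. A PP approach would instead place a prior on the changepoint location and infer its posterior, which also gives you uncertainty about where the shift happened:

```python
def cusum_detect(xs, reference, threshold, drift=0.0):
    """One-sided CUSUM: return the first index where the cumulative
    excess of the signal over `reference` exceeds `threshold`,
    or None if no shift is flagged. A simple baseline for detecting
    an upward shift in the mean of a stream."""
    s = 0.0
    for i, x in enumerate(xs):
        s = max(0.0, s + (x - reference - drift))
        if s > threshold:
            return i
    return None

# Toy stream whose mean jumps from 0 to 1 at index 50 (fabricated example).
stream = [0.0] * 50 + [1.0] * 50
print(cusum_detect(stream, reference=0.0, threshold=3.0))  # → 53
```

The detection lags the true changepoint by a few samples because the cumulative sum needs time to cross the threshold; lowering the threshold trades that lag for more false alarms.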
Where can I get the slides and the notebook?
Thank you!
Links to the notebook are shown at 1:04:00, and earlier.
hello!
Perfect woman