Awesome video. The example you showed seems to match very closely with the types of problems that you would solve with a KDE. I think a lot of what people use normalizing flows for right now is learning an invertible mapping from a source distribution to a sink distribution. Could you shed some light on how you could use this normalizing flow model you built to generate new samples of data?
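For anyone wondering the same thing: once the flow is trained, generating data is just running it backwards; you draw z from the simple base distribution and push it through the inverse map. A minimal PyTorch sketch, assuming a hypothetical `flow` object with a `dim` attribute and an `inverse` method (not the video's actual code):

```python
import torch

def sample_from_flow(flow, n_samples):
    # Draw from the simple base distribution p(z), e.g. a standard normal
    z = torch.randn(n_samples, flow.dim)
    # Push the latent samples back through the inverse map to get
    # new data-space samples x ~ p(x)
    x = flow.inverse(z)
    return x
```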
I have a potentially stupid question, but I'm just a dropout learning on my own and trying to make it, lol. Anyway: would it be reasonable to think of a normalizing flow model as similar to a stochastic optimal control problem, e.g. one using the Hamilton-Jacobi-Bellman equation? Say you're defining a dynamic strategy in a financial market, with its behavior defined relative to the distributions the market creates as it plays out. You'd essentially learn the optimal policy such that, as the market's distribution evolves, your execution evolves with it, so that the end result of your execution resembles the p(z) you defined as desirable, having gotten there by transforming the market's distribution p(x). Does that sound anywhere close to the ballpark? Or would it actually be more like setting up the model as a normalizing flow and using a Hamilton-Jacobi-Bellman setup to optimize and train it?
So I went back to the beginning and I'm going to try this again. Basically, here's the idea: we have some desired distribution that we want to use, and that is p(z). Then, using the math and our data, we can run the data through and find the CDFs needed to transform it, based on the distribution we described, into some other distribution p(x). That does two things: (1) it gives us a more "true" distribution for the actual process we're trying to model, and (2) it gives us a translated way to interpret that distribution, relating it to the p(z) we defined through the CDF transformations. But now I have another rabbit hole to go down, because it seems like another method for a copula or something, lol.
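That intuition is essentially the probability integral transform, which is also the building block of copulas: pushing a sample through its own CDF gives a uniform, and pushing a uniform through a target inverse CDF gives the desired distribution. A quick SciPy sketch, assuming Gaussian data purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Data from some source distribution p(x); Gaussian here just for illustration
x = rng.normal(loc=2.0, scale=3.0, size=10_000)

# Probability integral transform: pushing x through its own CDF gives Uniform(0, 1)
u = stats.norm.cdf(x, loc=2.0, scale=3.0)

# Pushing the uniform through a target inverse CDF gives the desired p(z),
# e.g. an exponential; this two-step CDF trick is also the copula construction
z = stats.expon.ppf(u)
```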
I have to believe that a super complex version of this in 8 dimensions using spinor fields is what James Simons figured out way way back
Wow, what an explanation and visualization... simply outstanding!
Amazing man! Awesome!
Thank you, Volt, for this class.
This guy is a genius.
But how do you invert the neural network? Like, how do you get x from an output y when the map is a composition of multiple CDFs?
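In case it helps anyone: every CDF is strictly increasing, so a composition (or mixture) of CDFs is monotone too, which means you can invert it numerically even without a closed form. A rough sketch using bisection; the `forward` map here is a made-up stand-in, not the video's actual network:

```python
from scipy import stats

def forward(x):
    # Stand-in for the learned map: a mixture of CDFs is itself a CDF,
    # hence strictly increasing with outputs in (0, 1)
    return 0.6 * stats.norm.cdf(x, loc=-1.0) + 0.4 * stats.logistic.cdf(x, loc=2.0)

def invert(f, y, lo=-50.0, hi=50.0, tol=1e-10):
    # Bisection works because f is strictly increasing
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = invert(forward, 0.7)  # recovers the x such that forward(x) = 0.7
```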
What a wonderful video!
Great video, thanks a lot!
That is a great tutorial. Do you have the code uploaded anywhere?
I also wonder how to generate samples x from the uniform z.
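Same question as the inversion one above. If the forward map happens to be a known CDF, its inverse is just the quantile function (`ppf` in SciPy), and generating x from uniform z is one line; this is inverse transform sampling. A toy example, assuming a Gamma CDF just for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Inverse transform sampling: if the flow's forward map were exactly the CDF
# of a Gamma distribution, the inverse map would be its quantile function,
# so uniform samples z pull back directly to data-space samples x
z = rng.uniform(size=1000)
x = stats.gamma.ppf(z, a=2.0)
```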
Well done.
Thank you
2:20 Where is the uniform distribution?
Do you recommend PyTorch for probabilistic programming?
I tried Pyro a long time ago and found it pretty easy to use. Also starting to like JAX more and more recently.
TensorFlow Probability is pretty dope too, tbh.
Excellent!
Good description, but the background sound is really distracting and annoying.
Classic
Please read out the equations and explain what they mean in plain English. And remove the distracting music.