Please accept my sincere appreciation for this video! Best video about the DCC model ever!
Thank you very much! Glad you found it useful.
You are not a dirty quant. YOUR CONTENT IS CLEAN.
Haha. Thank you!
Thanks. The lighting, background, etc. is really professional!
Thanks bud, appreciate it
Thanks for the amazing video. Do you have an idea how to calculate the composite likelihood with the pairwise method over all constituents, meaning over all neighboring pairs of assets, to estimate the two DCC parameters alpha and beta? I would like to understand how to calculate it on a big data set like you did with the complete S&P 500. Pakel et al. (2020), "Fitting Vast Dimensional Time-Varying Covariance Models", suggests doing that with a pairwise likelihood. Do you have an idea how to implement it in your code?
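One possible sketch, not from the video: the pairwise composite likelihood just sums the bivariate log-likelihoods over neighboring pairs, so the notebook's loglike_norm_dcc_copula could be reused on two-asset slices of udata_list (assuming it accepts a list of any length):

# Sum the bivariate negative log-likelihoods over neighboring asset pairs
# (Pakel et al., 2020). Assumes the notebook's
# loglike_norm_dcc_copula(params, udata_list) is in scope and accepts a
# two-element list of uniform series.
def composite_loglike(params, udata_list):
    return sum(loglike_norm_dcc_copula(params, udata_list[i:i + 2])
               for i in range(len(udata_list) - 1))

# Then optimize exactly as in the notebook, just with the composite objective:
# opt_out = minimize(composite_loglike, np.array([0.01, 0.95]),
#                    args=(udata_list,), bounds=bnds, constraints=cons)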
Hi, just wanted to say thank you so much; I found this video very helpful when searching for 'conditional probability'. A happy accident that led to a great channel for quant finance and physical science research. Cheers! 🥂
Music to my ears! That was the plan all along. Welcome onboard :-)
Why, when computing the GARCH volatility, do you multiply the conditional volatility by 1600? In this part of the code: 'garch_vol_df = pd.concat([pd.DataFrame(model_parameters[x].conditional_volatility/100)*1600 for x in model_parameters], axis=1)'
Assume roughly 256 trading days in a year; the square root of 256 is 16. Then multiply by 100 so that 0.01, i.e. 1%, shows up as 1.00 in the code.
16 x 100 = 1600
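A minimal sketch of that scaling, assuming the arch package and returns pre-scaled by 100 as in the notebook ('AAPL' is a hypothetical column name):

from arch import arch_model

# rets is the returns dataframe from the notebook, pre-scaled by 100
res = arch_model(rets['AAPL'], dist='t').fit(disp='off')
daily_vol = res.conditional_volatility / 100   # back to decimal units
annual_vol_pct = daily_vol * 1600              # sqrt(256) * 100 = 16 * 100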
Thanks for checking!
Hey, very helpful video. Can you help me with the code for the DCC-MIDAS-X model, which is somewhat similar to this one? I am unable to implement this model. Please help.
Thanks, very helpful video. Could you please suggest how an exogenous variable X can be incorporated into a DCC-GARCH model, to see the impact of X on the correlation of different assets?
Glad you liked the video.
Does the exogenous variable have a correlation with GARCH characteristics?
You can’t just throw anything in there as it won’t converge.
Just add it as another asset and see what it looks like. No problem in doing that.
Remember we are looking at correlation of VOLATILITY, not returns.
@@dirtyquant Thanks for replying.
X has GARCH characteristics. I want to check whether variable X impacts the dynamic conditional correlation of two assets, i.e. whether it has a positive or negative effect.
It won’t have any effect on the other 2 variables. But you can look at the time series of the DCC and I find that useful.
Let me know how you get on.
You can post your findings on dirtyquant.com
Thanks for the video! Just wanted to ask whether, in your opinion, it makes sense to use this to model implied volatilities, or whether it might be better to use a multivariate OU process (with log(x)). Thanks!
Very nice video, but the background music is distracting.
Really clear explanation
Thank you
And good knowledge! Learned something new today.
Yeah, you don’t see DCC very often in the wild!
Thanks for the video. Just wanted to ask what the limitations of DCC-GARCH models are?
Sometimes the DCC effects aren't there....
It's not a magic pill
Hello. The last plot can't be executed. Something went wrong.
Hi! First of all, many thanks for your video. You clarified a lot of stuff that is a necessity for my thesis. I do run into an issue.
I want to use the skewed Student's t distribution for my DCC-GARCH model. I have adjusted dist to 'skewt', and in the function garch_t_to_u I have added eta and lambda as parameters, so that udata = t.cdf(std_res, lambda). However, opt_out.success now returns False and I do not get any estimates.
Could you please let me know how I can adjust the code to use the skewed t distribution? If I have made mistakes in my adjustments as explained above, please let me know.
Many thanks in advance!
Hi mate. Can you post this on dirtyquant.com and I will look at it later
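In the meantime, a minimal sketch of what might work, assuming the arch package's SkewStudent distribution exposes a cdf that takes both shape parameters as a list (scipy's t.cdf alone has no skew parameter, which is likely why the optimiser fails):

from arch import arch_model
from arch.univariate import SkewStudent

def garch_skewt_to_u(returns):
    # GARCH(1,1) with skewed Student's t innovations
    res = arch_model(returns, dist='skewt').fit(disp='off')
    std_res = res.resid / res.conditional_volatility
    # assumption: arch names the skew-t parameters 'eta' (tails) and 'lambda' (skew)
    return SkewStudent().cdf(std_res, [res.params['eta'], res.params['lambda']])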
You are using the conditional vol of the returns?
Yes, we are looking at the correlation of vol, not the correlation of returns
Is there a similar model for the dynamic correlation of returns instead of volatility?
There is no reason why you have to use GARCH. If you omit that step and just feed in returns, it should work. I think the issue is that returns are a lot more unstable compared to vol.
Thanks a lot. Very impressive. Can you help me with VARMA-CCC-GARCH?
Please, could you do it with the DECO model?
Hi Davide,
Never heard of the DECO model. Will look into it.
Thanks!
@@dirtyquant Why is there a times 1600 in the volatility?
The square root of 252 is approximately 16, times 100 because the returns are tiny.
So annualised, and in a scale that is more relatable.
@@dirtyquant There are no tutorials on DECO anywhere... Would you like to help me? stats.stackexchange.com/questions/534509/compute-conditional-equicorrelation-matrix-deco-garch-from-dcc-garch-estimatio
I understand the basics, and could use this to interpret past data. But I am struggling conceptually with applying DCC to a simulation/forecast model. Do you have any recommended follow-up reading?
I am planning on doing a video on it, but look at my response here:
dirtyquant.com/t/video-cholesky-decomposition-take-your-backtesting-to-the-next-level/127/4
Thanks a lot for your videos! Is there a way to change your code to estimate the DCC using a GARCH with a normal innovation density instead of a t-distribution? I am really struggling to understand which parameters I should extract from the results of the GARCH to then get the uniform distribution.
Very easy.
Instead of udata = t.cdf(std_res, nu)
Just use the normal CDF
And instead of arch_model(rets[x], dist = 't')
Just change that to normal.
Done!
@@dirtyquant Thanks a lot for your kind reply, you are really making my last Master's assignment a lot easier! Could you also tell me which parameters I should put in the normal CDF? I mean, how do I get the mean and std from the fitted GARCH model?
(Amazing channel really)
@@chiarapalma8276 Ha, glad I can help. Well, the data going in there is standardised, so it already has a mean of 0 and a variance of 1.
@@dirtyquant I see! So I just need to get the standardized residuals out of it, right? Any idea on how to do it? I tried to get the same object (std_res) from the fitted garch model with normal, but the object does not exist.
Because everything is in a function, and hence a local variable, it won't appear in your list of variables. I am travelling but will give you a working example later. Try to solve it yourself, you will learn a lot.
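For reference, a minimal sketch of such a working example, assuming the arch package; the standardized residuals are rebuilt from the fit result, since std_res inside the original function is local:

from arch import arch_model
from scipy.stats import norm

def garch_norm_to_u(returns):
    # GARCH(1,1) with normal innovations instead of Student's t
    res = arch_model(returns, dist='normal').fit(disp='off')
    # standardized residuals: residuals divided by conditional volatility
    std_res = res.resid / res.conditional_volatility
    # these are mean 0, variance 1, so the standard normal CDF needs no extra parameters
    return norm.cdf(std_res)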
Thanks, it's really helpful. Can you please share how to calculate value at risk using DCC-GARCH in Python?
Hey, yes, I have been planning to make something like that exactly, but life got in the way. Soon I promise :-)
Thanks...Looking forward.
You bet!
In the loglike_norm_dcc_copula function, why do you subtract np.eye(N) from the inverse matrix?
Hey Rocky, if you look at page 135 (106 on top right) of this doc you will see the log like function for normal copula: ses.library.usyd.edu.au/bitstream/handle/2123/14728/2016_Christian_Contino_Thesis.pdf?sequence=2&isAllowed=y
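In short: the Gaussian copula density divides the joint normal density by the product of the standard normal marginals, which is where the R⁻¹ − I term comes from. A minimal sketch for a single observation (u is a length-N vector of uniforms):

import numpy as np
from scipy.stats import norm

def gaussian_copula_loglik(R, u):
    # u: length-N vector of uniforms; R: correlation matrix
    z = norm.ppf(u)                        # normal scores recovered from the uniforms
    sign, logdet = np.linalg.slogdet(R)    # numerically stable log-determinant
    # log c(u) = -0.5*log|R| - 0.5 * z' (R^{-1} - I) z
    return -0.5 * logdet - 0.5 * z @ (np.linalg.inv(R) - np.eye(len(R))) @ z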
@@dirtyquant Thank you for sharing your work, I really appreciate it. I do have a question: what if I would like to change from multivariate Gaussian distributed errors to multivariate t-distributed errors, how do I change the code?
Thanks, very helpful! I learned a lot from your code. I want to know how to compute standard errors to make inferences about the estimates in DCC. Could you add how to compute standard errors to your Python code, please?
Hey, glad you liked the video and code.
So the errors around the estimates really depend on your optimiser; some optimisers report uncertainty estimates around the parameters. If you use MCMC, then you can just look at the chain and infer the errors from there.
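For a frequentist alternative, a minimal sketch, assuming the notebook's negative log-likelihood and the optimum returned by minimize: invert a finite-difference Hessian at the optimum to get asymptotic standard errors:

import numpy as np

def std_errors(negloglike, theta_hat, args=(), eps=1e-5):
    # Central finite-difference Hessian of the negative log-likelihood at
    # the optimum; its inverse approximates the parameter covariance matrix.
    k = len(theta_hat)
    steps = np.eye(k) * eps
    H = np.zeros((k, k))
    f = lambda th: negloglike(th, *args)
    for i in range(k):
        for j in range(k):
            H[i, j] = (f(theta_hat + steps[i] + steps[j]) - f(theta_hat + steps[i] - steps[j])
                       - f(theta_hat - steps[i] + steps[j]) + f(theta_hat - steps[i] - steps[j])) / (4 * eps**2)
    return np.sqrt(np.diag(np.linalg.inv(H)))

# e.g. std_errors(loglike_norm_dcc_copula, opt_out.x, args=(udata_list,))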
Third time watching this vid, still learning. How do we simulate this?
I will do a video at some point, but look at this discussion, as I link to my paper on how to do it:
dirtyquant.com/t/video-cholesky-decomposition-take-your-backtesting-to-the-next-level/127/3
Hi, I really enjoy the Video! Do you have a citable literature recommendation for DCC GARCH for me?
hdl.handle.net/2123/14728 Page 89 of the PDF onwards
@@dirtyquant Thank you, that helps a lot!
Hi, it's a really helpful video. Really appreciated :))
I have tried to walk through your code. Just one question: if I want to get the DCC correlation of the stocks, do I just extract the Rt value?
Thanks in advance.😁
Hi, yes, but that will give you the correlation of volatility, not of the returns, as it’s way more stable.
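If it helps, a minimal sketch of that extraction, assuming Rt is a sequence of T conditional correlation matrices aligned with the return dates, as in the notebook:

import pandas as pd

def dcc_corr_series(Rt, dates, i, j):
    # time series of the conditional correlation between assets i and j
    return pd.Series([R[i, j] for R in Rt], index=dates)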
@@dirtyquant Hi Dirtyquant, thanks so much for getting back so quickly! Is there any way I can modify the code to get the return correlation? I tried inputting returns instead of udata_list, but the model reports no success.
The model is (likely) not going to converge if you use returns, as they are far too unstable. You need to convert the returns to a uniform distribution using the appropriate CDF for that distribution.
Again, I don’t think it will work as the autocorrelation of returns decays far too quickly so it will be incredibly noisy.
Please try it and report back. :-)
Thank you! I’ll try it out 🥹
thanks! very clear
Thank you. Glad it all made sense to you
Good, but could you walk through the code? That's where the real learning/value to be gained is.
Your wish is my command. On another video :-)
You have access to the GitHub page in the description as well
It was really a little complicated for me to calculate DCC and CCC (Constant Conditional Correlation) in Excel. I had to use an LGARCH MLE (leveraged model) with American indexes: Nasdaq, S&P 500 and Dow Jones. However, it's going to be very difficult to calculate it in Python. I heard that it is very easy to estimate in R. I didn't know that DCC can be calculated in Python too. What about CCC? Your code looks complicated. Can you share it? Regards.
Link in the description to my GitHub with all the code
Hello Master Quant, great video. Please add the code for the predicted next day DCC Volatility.
Easy! Let me get to it
Hi, it's me again lol. Do you have any related videos on calculating a DCC forecast?
Thank you!
Hi mate. DCC automatically does 1 step ahead forecast
@@dirtyquant Hi, I'm quite curious how to modify the code if I want a 30-day-ahead forecast. Do I just change the loop over Rt?
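One possible sketch, my assumption rather than something covered in the video: for horizons beyond one step, the standard approximation iterates the Q recursion towards its unconditional mean, E[Q_{t+h}] = (1 − a − b)·Q̄ + (a + b)·E[Q_{t+h−1}]:

import numpy as np

def dcc_forecast(Q_t, Q_bar, a, b, horizon=30):
    # Iterate the DCC recursion forward with shocks at their expected value,
    # so Q decays geometrically towards the unconditional matrix Q_bar.
    Q = np.array(Q_t, dtype=float)
    for _ in range(horizon):
        Q = (1 - a - b) * Q_bar + (a + b) * Q
    d = np.sqrt(np.diag(Q))
    return Q / np.outer(d, d)  # renormalize Q into a correlation matrix R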
Very helpful! Thank you so much!
Welcome mate!
I've managed to get through to %time opt_out = minimize(loglike_norm_dcc_copula, np.array([0.01, 0.95]), args = (udata_list,), bounds=bnds, constraints=cons), however, I keep on getting --> ValueError: not enough values to unpack (expected 2, got 1). Need some help!!
what does udata_list look like?
Mine looks like:
[array([0.5747693 , 0.91835073, 0.25096253, ..., 0.43679173, 0.45847006,
0.60708795]),
array([0.10530546, 0.9973108 , 0.6342007 , ..., 0.88912801, 0.21804338,
0.88675833]),
array([0.48491891, 0.87296667, 0.37504468, ..., 0.46726709, 0.72021928,
0.20255693]),
array([0.22081462, 0.07066372, 0.33594198, ..., 0.28937289, 0.6635698 ,
0.9696474 ]),
array([0.24902229, 0.16361277, 0.5294778 , ..., 0.61282812, 0.45911533,
0.52582611])]
bnds should be:
((0, 0.5), (0, 0.9997))
cons:
{'type': 'ineq', 'fun': <lambda>}
Have you upgraded yfinance to the latest version, 0.1.63?
pip uninstall yfinance
pip install yfinance --upgrade --no-cache-dir
Are you using your own data or the yahoo finance data from the notebook?
How would you name rets in this case if it's data from my own Excel file?
Did you transform the data using the garch function?
It shouldn’t matter, as long as the data is in a list of arrays like my udata_list
@@dirtyquant No, I haven't done that yet... my udata_list is empty. My data is just a record of returns.
OK, does your rets look like a dataframe with dates as the index and stock names as the columns?
How does one profit from this?
Pairs trading
Interesting video... thanks for sharing... loved it, although I'm not a fan of the background music. It would be 10x better without it!
P.S. I consume such YouTube videos at 1.5x or 2x and it's very annoying if there is background music :)
Good luck... looking forward to more interesting stuff
Thanks for watching. Glad you are enjoying the videos.
Straight Outta Pandas 🤣