Please continue producing more videos. You have the blessing of simplicity and accuracy. Thank you for your amazing video.
Yes, with your support and prayers.
Keep sharing!
Sir, thank you so much! I am currently doing my thesis for my MBA program and this helps my confusion that I've had for 2 weeks now. Thank you for the knowledge, Sir!
Glad it helped!
Hi Dr Bilal ... Thank you so much for your video. This helps me in my final dissertation. This is the last step of my data analysis. May god bless you more and more in future.
You're welcome.
thanks.
keep sharing.
much more is to come.
AoA sir, I'm a lucky person that you were my supervisor in BS (Hons). Really a nice personality.
wasalam, It's a blessing to be helpful and remembered.
Wasalam
Welcome. Stay blessed
Very helpful video. Thank you. When should we use GLS with year and industry dummies?
Introducing dummies for the time and cross-sectional dimensions in GLS regression is particularly relevant when handling heteroskedasticity, contemporaneous correlation, and unobserved effects in panel data analysis.
Hello, I have a question. When should we use corr(ar1) versus corr(psar1)? And what does the Wald chi2 indicate? When is the value of the Wald chi2 considered high? Thank you!
corr(ar1): assumes first-order autocorrelation with a common AR(1) coefficient across all panels (each value is related to its immediately previous value in the same way in every panel).
corr(psar1): assumes panel-specific first-order autocorrelation, i.e., a separate AR(1) coefficient is estimated for each panel.
The Wald chi2 reported by xtgls is a joint test of the null hypothesis that all the slope coefficients are zero.
A large Wald chi2 with a small p-value (Prob > chi2 below 0.05) means the regressors jointly have a significant effect on the dependent variable.
There is no fixed cutoff for the statistic itself, so judge it by its p-value rather than by its raw magnitude.
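The same kind of joint Wald test can also be run by hand after estimation. A minimal Stata sketch (y, x1, and x2 are placeholder variable names; the xtgls options are illustrative):

```stata
* Estimate the GLS model (options chosen for illustration only)
xtgls y x1 x2, panels(heteroskedastic) corr(ar1)

* Joint Wald test that the coefficients on x1 and x2 are both zero;
* a small p-value means the regressors are jointly significant
test x1 x2
```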
Hello, first of all thank you so much for the video. Anyway, the output of xtgls contains no R-squared information. In that case, where can we find the R-squared? Thank you in advance!
When you estimate a model using the xtgls command in Stata, which is a generalized least squares (GLS) method, the R-squared statistic is not calculated. This is because the total sum of squares cannot be decomposed in the same way as in ordinary least squares (OLS) regression, making the R-squared statistic less useful as a diagnostic tool for GLS regressions. However, you can still assess the goodness of fit of your model: calculate the correlation between the observed values of the dependent variable and the fitted values. This correlation gives you an idea of how well your model explains the variability of the dependent variable, and its square can be reported as a pseudo R-squared, serving as an alternative to the traditional R-squared statistic when working with GLS models.
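A minimal Stata sketch of the observed-vs-fitted correlation approach (y, x1, and x2 are placeholder variable names; the xtgls options are illustrative):

```stata
* Estimate the GLS model
xtgls y x1 x2, panels(heteroskedastic) corr(ar1)

* Fitted values of the dependent variable
predict yhat, xb

* Correlation between observed and fitted values;
* its square can be reported as a pseudo R-squared
correlate y yhat
display "Pseudo R-squared = " r(rho)^2
```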
Sir, thank you for the explanation. Regarding the video: I analyzed my data and have a problem. My data only has an autocorrelation issue (FE model); which command should I use? I read some reference commands but didn't know what each option stands for. Thank you.
Let's assume that y is the dependent variable and x1 and x2 are independent variables:
Since you haven't found heteroskedasticity in your data, instead of the following:
xtgls y x1 x2, panels(heterosk) corr(psar1) nolog force
use this one:
xtgls y x1 x2, corr(psar1) nolog force
It will be even better if you compare the results of both of these commands to see if there is any difference between the two results.
Hello Dr., why does table 4.5 for the Wooldridge test for serial correlation say H0 can't be rejected when the value is 0.0318?
Hello, the results are correctly interpreted; however, there is a typo. "Statistics show that the null hypothesis is not rejected (p-value > 0.05)" should read "Statistics show that the null hypothesis is rejected (p-value < 0.05)".
Thanks for correction.
Sir, can we change the independent variables to simulate their effect on the dependent variable, such as a 10 percent change in one variable, to see how much it affects the DV? Is there such a command available, to your knowledge, for FGLS? Thank you
Yes, you can use Stata's margins command after estimation.
1. Estimate the FGLS model: `xtgls y x1 x2 x3, panels(heteroskedastic) corr(ar1)` (the options depend on your data)
2. Calculate the marginal effects: `margins, dydx(*)`
This gives you the marginal effect of each independent variable on the dependent variable. Since xtgls fits a linear model, the marginal effect of x1 is simply its coefficient, so the predicted effect of a 10 percent increase in x1 is 0.10 times x1 times that coefficient. In recent versions of Stata you can also compare predictions directly with the at() option, e.g.:
`margins, at(x1=generate(x1*1.1))`
versus `margins, at(x1=generate(x1))`; the difference between the two predictions is the effect of a 10 percent increase in `x1` on the dependent variable `y`.
@@drbilalmehmood206 ty sir. I'll try it.
Hi sir. Do you mean that when we run xtgls we use the original data, which is not logged?
Taking the logarithm depends on the degree of variance of the given variable and is suitable when we need to linearize it.
Thanks Dr.
So if I use log variables, I drop the 'nolog' part of the command?
The 'nolog' option in the xtgls command suppresses the iteration log in Stata's output; it has nothing to do with logarithms of variables.
Thanks Dr for the info
Hi Dr. Bilal, when should we add "panels(heterosk) corr(psar1) nolog force" to the GLS models, and when should we leave them out? Thanks.
panels(heterosk) is used when heteroskedasticity is present across panels.
corr(psar1) is used when panel-specific first-order autocorrelation is present.
It's better to run tests for heteroskedasticity and autocorrelation before running xtgls. These are:
Breusch and Pagan Lagrangian multiplier test
Modified Wald test for groupwise heteroskedasticity
Wooldridge test for autocorrelation
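A sketch of how these tests are typically run in Stata (y, x1, and x2 are placeholder variable names; xttest3 and xtserial are user-written commands you may need to install, e.g. with `ssc install xttest3` and `ssc install xtserial`):

```stata
* Breusch and Pagan Lagrangian multiplier test (after random effects)
xtreg y x1 x2, re
xttest0

* Modified Wald test for groupwise heteroskedasticity (after fixed effects)
xtreg y x1 x2, fe
xttest3

* Wooldridge test for first-order autocorrelation in panel data
xtserial y x1 x2
```

In each case, a small p-value means the null hypothesis (no heteroskedasticity / no autocorrelation) is rejected, so the corresponding xtgls option should be used.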
@@drbilalmehmood206 I did these tests; yes, my data has both problems of autocorrelation and heteroskedasticity. So are you suggesting not to use panels(heterosk) and corr(psar1)? Thanks for your concern, cheers.
You should use these options, as your data has these issues.
Use panels(heterosk) and corr(psar1) in the xtgls command.
@@drbilalmehmood206 thank you, I really appreciate your efforts, good job brother.
Hello sir,
The xtgls command is used when T is greater than N, but in my case N is greater than T. Please help: which estimator can be used in that case?
Hello,
Your data seems to be a micro panel. GMM can be a good option in this case.