Here's a fun pet project I've been working on: udreamed.com/. It is a dream analytics app. Here is the YouTube channel where we post a new video almost three times per week: th-cam.com/channels/iujxblFduQz8V4xHjMzyzQ.html
Also available on iOS: apps.apple.com/us/app/udreamed/id1054428074
And Android: play.google.com/store/apps/details?id=com.unconsciouscognitioninc.unconsciouscognition&hl=en
Check it out! Thanks!
I usually do at least the following three:
1. CMIN/DF: should be between 1 and 3 (this is a measure of absolute fit)
2. CFI: should be greater than 0.95 (this is a measure of relative fit)
3. RMSEA: should be less than 0.06 or so (this is a parsimony-adjusted measure of fit)
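For illustration, here is a minimal sketch of how these three indices could be computed outside of AMOS using the Python semopy package. The model description, item names, and data file are hypothetical placeholders, and the column names follow recent semopy versions:

```python
# A minimal sketch, assuming a hypothetical CSV of Likert responses with
# items x1-x6 loading on two latent factors. semopy's calc_stats() returns
# chi-square, degrees of freedom, CFI, RMSEA, and several other indices.
import pandas as pd
from semopy import Model, calc_stats

data = pd.read_csv("survey.csv")  # hypothetical data file

desc = """
F1 =~ x1 + x2 + x3
F2 =~ x4 + x5 + x6
"""

model = Model(desc)
model.fit(data)

stats = calc_stats(model)  # one-row DataFrame of fit measures
cmin_df = stats["chi2"].iloc[0] / stats["DoF"].iloc[0]
print(f"CMIN/DF: {cmin_df:.2f}   (want between 1 and 3)")
print(f"CFI:     {stats['CFI'].iloc[0]:.3f}   (want > 0.95)")
print(f"RMSEA:   {stats['RMSEA'].iloc[0]:.3f}   (want < 0.06 or so)")
```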
Dear James, can you give some references for these fit indices? Thank you very much.
@@ThanhBinhVu-mj8eg Here you go: statwiki.gaskination.com/index.php?title=References#Model_Fit
Sir, how can I improve my RMSEA value if it's showing .000 after one modification? Please help.
@@lalaliza315 RMSEA should be low. So, 0.000 is as "good" as you can get. No need to improve it.
@@Gaskination thank you so much sir for the reply...
May God (or whoever or whatever) bless you, Dr. Gaskin! You can spend weeks and months paying for courses that don't do anything except raise more questions & insecurities, and then along comes such a selfless individual who easily explains everything in a 10-minute YouTube video. I got teary-eyed when I realized I was finally understanding something. Instant subscribe.
Thanks for the kind feedback! Makes it all worth it :)
Hi Dr. Gaskin! You are helping many people. Do you know that? After watching your tutorial, I can solve my problem. Thank you so much!! Now I am preparing for my final defense this month. Thanks again!! Wish you all the best!!
Mike (from Taiwan)!
I am feeling and doing exactly what you did 8 years ago, wow. preparing for my final defense this month and here to thank Dr. Gaskin
Hello Mr. James! Thank you for this video. After learning AMOS for 2 months, I can solve my problem related to the badness of fit of my mediation model. God bless!!!
Sir, from the day I began my tool standardization, I have sincerely learned to use AMOS through your videos. I post this here out of gratitude, since this was the first video of yours I watched. And today, when I have doubts about CFA or SEM, I just type your name on YouTube. Thank you so much.
Thank you for the video!
Dear James Gaskin,
Thanks for the wonderful online tutorial. I have a proposed model that includes 10 independent variables, 1 moderator, and 2 dependent variables. While testing its “model fit” for the first time, I got GFI = 0.888 and the rest of the measures were just perfect. Then I performed the following steps to improve GFI:
1) Co-varied the error terms of the same variables as per the modification indices. Then I got:
GFI=0.890, P-value = 0.123, CMIN/DF = 1.051, AGFI=0.871, CFI=0.991, RMSEA=0.012, PClose=1.0
2) Used your stats tool package “Fit check”. It suggested dropping an item. I dropped that item and got:
GFI=0.892, P-value = 0.158, CMIN/DF = 1.045, AGFI=0.873, CFI=0.992, RMSEA=0.011, PClose=1.0
3) Again used your stats tool package “Fit check”. It suggested dropping one more item. I dropped that item and got:
GFI=0.896, P-value = 0.236, CMIN/DF = 1.033, AGFI=0.877, CFI=0.994, RMSEA=0.010, PClose=1.0
As you can see, all the values were perfect except GFI. There was no significant improvement in the value of GFI even after performing these three steps. Kindly suggest how the value of GFI can be improved in such cases.
I usually have trouble with GFI when I have a lot of variables and/or a large sample size. These two things inflate the chi-square, and GFI is not robust to them. I would attribute it to model complexity in your case (not knowing the sample size).
James Gaskin Thanks a lot for the reply Dr. Gaskin.
Sharma et al. (2005), among many others, articulate that, in the context of SEM using AMOS, given the issues with GFI, it has lost its popularity in recent years and therefore its use should be discouraged.
Despite this, recent research papers that use AMOS do consider GFI as a measure of model fit. What are your views regarding the use of GFI?
Sharma, S., Mukherjee, S., Kumar, A., and Dillon, W.R. (2005), "A simulation study to investigate the use of cutoff values for assessing model fit in covariance structure models," Journal of Business Research, 58 (1), 935-43.
Sumedha Chauhan I never use it unless the reviewers specifically request it. I usually report CFI instead (as well as cmin/df, RMSEA).
Hello. Thanks a lot for your prompt replies. I agree with you regarding the three factors of Burnout (the third one is LACK OF PERSONAL ACCOMPLISHMENT), but my pilot surveys revealed that Lack of Personal Accomplishment does not affect employee burnout, so I have ignored it for the time being. I shall browse your formative construct and reconsider it. Really grateful.
This is an acceptable approach. My recommendation is to use either a random sample of your data for each, or to use the same data that you will be using for the structural model. Most people do not have the luxury of an abundance of data, so they simply use everything they have for both EFA and CFA (which is what I have done here).
Dr. Gaskin, thanks so much for the excellent step-by-step! It was priceless in my assistance of a graduate student who is soon to be a PhD.
Thank you James! You saved my life for my Ph.D. homework.
Dr. Gaskin, just wanted to thank you for putting together such a great walk-through. I've had CFA / SEM covered in a few classes, but only conceptually. I've used your walk-through with some of the data from my dissertation, and it all makes so much more sense (and, I can actually do the analysis now). Thanks for sharing your knowledge!
@Gaskination Thank you so much. I am going to share your tutorials with students at Information Systems department at Georgia State University. Really appreciate the work Sir.
1. Sample size inflates the chi-square, but also inflates the CFI and RMSEA.
2. You could do that. Usually when we have a sample size of more than 500, we split it randomly for the EFA and CFA.
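For example, a random split could be done in pandas along these lines (the file name, the 50/50 ratio, and the seed are assumptions):

```python
# A minimal sketch: randomly split one sample into an EFA half and a
# CFA half. The file name, 50/50 ratio, and seed are assumptions.
import pandas as pd

df = pd.read_csv("survey.csv")

efa_half = df.sample(frac=0.5, random_state=42)  # random half for the EFA
cfa_half = df.drop(efa_half.index)               # remaining half for the CFA

efa_half.to_csv("efa_sample.csv", index=False)
cfa_half.to_csv("cfa_sample.csv", index=False)
```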
Literature says the ideal is four items. Logic says the optimal 'minimum' is three (for stability's sake). Practice says that you can sometimes get away with only two (but this often leads to instability - e.g., standardized loadings greater than 1.00).
We can covary the error terms on the same factor; this just means the items are systematically correlated (probably due to wording or position in a survey). Covarying across factors implies the same thing, so we must be justified in covarying those (look at the wording of the items and see if they are too similar).
1. No need to transform. The Likert values are fine.
2. These are called Heywood cases. You might try covarying the negative error term with another error term in that factor. If that doesn't work, then you might need to constrain the indicator weights to be equal (set their regression weight parameter to all equal "a" (without quotes)). If you do this for more than one factor, make sure the next one is "b", then "c", etc.
Hope this helps.
0.700 or higher is the generally agreed-upon "good" factor loading, but really it depends on your sample size and measurement error. Hair et al.'s "Multivariate Data Analysis" has a table that discusses the minimum threshold based on sample size.
A great help for my Ph.D. study. I hadn't tied the covariances in the errors and was getting low values when discriminant validity was tested.
Go to the object properties of the error term (right-click or double-click the error term). Then go to the Parameters tab. Then type 0.05, or something similarly small, in the Variance box.
@farispt
The theoretical basis is that they are reflective and interchangeable items, which means that they were probably worded very similarly, which means that they probably have a systematically related error (rather than a causal one). So, yes, you can covary the error terms as long as they are within the same factor.
The only thing I can think of is to try to find outliers. This doesn't just mean extreme values, it also means unengaged and erroneous responses. For example, if someone responds with all 3s or with 1, 2, 3, 4, 5, 1, 2, 3, etc... or if they respond to a reverse coded question the same way they respond to regular questions.
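One way to screen for such unengaged responses programmatically is to look at the standard deviation of each respondent's answers across the Likert items. A minimal sketch in pandas, where the item-column prefix and the cutoff are assumptions to adapt:

```python
# A minimal sketch: flag potentially unengaged respondents by the standard
# deviation of their answers across Likert items. A respondent answering
# all 3s gets SD = 0. The column prefix and the 0.5 cutoff are assumptions.
import pandas as pd

df = pd.read_csv("survey.csv")
items = [c for c in df.columns if c.startswith("Q")]  # hypothetical item columns

df["response_sd"] = df[items].std(axis=1)
flagged = df[df["response_sd"] < 0.5]
print(f"{len(flagged)} potentially unengaged respondents to review")
```

Note that patterned responses (1, 2, 3, 4, 5, 1, 2, 3, ...) will not have a low SD, so those still need to be checked by eye or with a separate pattern check.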
I would do it together. The main point of the CFA is to determine if you have distinct factors for measuring your constructs. In order to avoid tautological constructs, I would recommend putting them all together.
1. Hair et al 2010 "Multivariate Data Analysis" Table 12-4, p. 654
2. Wow! That is a lot! That's great! Just realize you might struggle to achieve good fit, but this is not unexpected when you have such a large sample size.
1. Either use the default AMOS sets (first item for each factor) or place the constraint of 1 on the item that loaded the strongest in the EFA.
2. Covarying the error terms accounts for additional covariance represented in the covariance matrix. Model fit is the extent to which the proposed model (your CFA) accounts for the covariances in the data. So, naturally this will improve fit.
This is an estimation problem due to items having really high correlation/communalities. To fix this, you might want to try using maximum likelihood (as opposed to principal axis factoring, or principal components analysis). You might also attempt to constrain the number of factors to a few alternative models (like try six instead of five). You might also see if there are items that are nearly identical in their wording, and eliminate one of them. Hope these suggestions help.
@vyeniaras
This tutorial is meant to be a mechanical demonstration. So, trimming is really more subjective than I make it out to be. To meet the criteria for convergent validity (AVE > 0.50, CR > 0.70), loadings on a single factor should at least average out to > 0.70. However, if you are working with established measures, and you're not worried about the validities, you can probably accept loadings as low as 0.30 (there are several references for this). Accepting low loadings may cause other problems.
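To make the arithmetic behind those thresholds concrete, here is a small sketch using made-up standardized loadings:

```python
# A minimal sketch of the convergent validity arithmetic:
# AVE = mean of the squared standardized loadings,
# CR  = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# where each item's error variance is 1 minus its squared loading.
# The loadings below are made-up illustration values.
loadings = [0.82, 0.75, 0.71, 0.68]

ave = sum(l ** 2 for l in loadings) / len(loadings)
sum_l = sum(loadings)
cr = sum_l ** 2 / (sum_l ** 2 + sum(1 - l ** 2 for l in loadings))

print(f"AVE = {ave:.3f}  (want > 0.50)")
print(f"CR  = {cr:.3f}  (want > 0.70)")
```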
These depend on many things. You want to decrease Chi-square while increasing degrees of freedom. The fastest way to do this is to address the modification indices. Also, you may have to identify items that are part of separate constructs but that are highly correlated. If possible, remove one of the two items in order to reduce cross loading.
OMG, this is exactly what I'm looking for. Thanks for your videos; they're so helpful for solving my problem with model fit. Love you from Vietnam.
Handling univariate outliers in CFA is the same as in EFA. Yes, you can perform CFA using the same data as the EFA, although if you have a lot of data, you can also randomly sample a portion for the EFA and then use the other part to do the CFA. This is considered more rigorous.
1. If they overlap, do one at a time.
2. The p-value in this case needs to be greater than 0.05. I know that sounds weird, but the null hypothesis here is that the model fits the data, so if you have a significant p-value, that means you do not have good fit. However, when you have large sample sizes or complex models, it is very difficult to achieve a non-significant p-value for the CMIN.
I don't know if you need a reference for it. It is simply mathematical. If you increase the sample size, the chi-square increases. If the chi-square increases, the p-value decreases. So citing this would be like citing a more complex version of "1+1=2" :)
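Concretely, the sample-size dependence is visible in the standard maximum likelihood fit function (standard SEM notation, stated here for reference):

```latex
\chi^2 = (N - 1)\, F_{ML}, \qquad
F_{ML} = \ln\lvert \Sigma(\hat\theta)\rvert
       + \operatorname{tr}\!\bigl(S\,\Sigma(\hat\theta)^{-1}\bigr)
       - \ln\lvert S\rvert - p
```

where S is the sample covariance matrix, Sigma(theta-hat) is the model-implied covariance matrix, and p is the number of observed variables. Since F_ML depends only on the covariance matrices, the chi-square grows roughly linearly in N, which is exactly why the p-value shrinks with larger samples.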
I have published a paper in a renowned Springer journal using your SEM techniques and cited your StatWiki. Thank you for your educational videos.
Hi James. The procedure that you used (dealing with error covariances and covarying errors by looking at the M.I.): could it be explained as follows? "You used the expected parameter change (EPC) in combination with the modification index (MI) and the power of the MI test to detect model misspecifications." Or is that not correct?
Thanks in advance,
best regards
Yes, that is a correct statement. However, be aware that this practice of correlating errors is widely considered bad practice. Here is a bit of discussion on it: statwiki.kolobkreations.com/index.php?title=Citing_Claims#Covarying_Error_Terms_in_a_Measurement_Model
Excellent presentation. Clarity at its best. Thank you very much.
@AtyDeh
It definitely works. I'm sorry you are not able to watch it. There may be filter issues at your location. If you are trying to access it from work, then they probably block YouTube. If you cannot access it from home, then you probably just need to try again another time. In the meantime, feel free to refer to my wiki for info on model fit: statwiki.kolobkreations.com
@ATarhini
I don't know if there is a right or wrong answer to this. I would include in the CFA all latent variables that I intended to use in my model. This would establish that they are distinct constructs. Moderators, in particular, should not be strongly correlated with the other variables in the model, so I would include them just to make sure they meet this criterion.
No, because 2nd order factors are not represented in the EFA. However, you can modify the CFA to accommodate a 2nd order factor after using the plugin.
@ATarhini It depends on whether it is adding any value. If it is truly contributing to the factor, then keep it; if it is not, then drop it. Covarying the error terms (which is what I assume you are talking about) is just one way to keep it without causing issues.
1. You can make a second order factor (I have a video about this), or you can just put all the items into a single factor.
2. This depends on what scales you are using. If you are using your own scales, then fine. But if you are using existing scales from another published paper, then you need good justification for removing those items. Also, you should make sure you do not compromise content validity (meaning of construct). Also, it is best to have 3 or more items per factor.
1. loading > 1 is okay if unstandardized, but not if standardized. If standardized and >1, then you might have a negative error variance that needs to be constrained to a small positive number.
2. That is the right order, although I almost never look at standardized residual covariances unless I simply cannot achieve good fit any other way.
either:
1. you are looking at UN-standardized values (instead, look at standardized) or
2. you have factors with only two indicators. This is unstable. You may need to fix the two indicators to have the same regression weight. You can do this by naming each of their regression weight parameters "a". (see the object properties for the line to assign these values)
No problem. It just means they are inversely correlated. This means that when one goes up, the other goes down. This is not a problem at all.
@stataguy The detailed answer is yes, you can covary any errors if there is a good reason for systematic correlation of residuals. However, if the correlation is due to a causal relationship (rather than similar wording - thus systematically related), then you should not covary them. In the video I try to keep it simple. Hope this helps. I can't think of a reference off the top of my head.
+Sabin Khadka, sorry for the delay in responding. YouTube has decided to stop notifying content creators when users post comments... In AMOS you can do it: th-cam.com/video/dsOS9tQjxW8/w-d-xo.html
smellofstrings: The reply function on YouTube video pages is not working currently, and YouTube is trying to fix it. So, until then, I will reply to your questions directly on your channel instead of here.
This means that you have a variable in your dataset called e1, but you are also trying to name a residual (error) e1. I recommend renaming all e1, e2, etc. to something else since e# is default for error terms.
It only translates the pattern matrix into a measurement model. If you want to tweak things afterwards to include a 2nd order factor, that is fine. But it can't detect that.
Dear @james gaskin, the video is really helpful! I have a short question: when I select "modification indices" in my AMOS model and try to calculate them, I get the following error notification: "modification indices cannot be calculated with incomplete data". Do you have a tip for me on how to overcome this and still be able to calculate the modification indices? I did tick the box at "estimate means and intercepts". Would be great to hear from you :) Thanks already!!
If you have missing data, then you cannot calculate modification indices. So, either you must impute that data in SPSS/Excel (and then uncheck 'estimate means and intercepts'), or you will need to work without modification indices.
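For the imputation step, one simple option in pandas might look like this. The file name, the -99 sentinel code, and the choice of median imputation are all assumptions; SPSS also offers more principled imputation methods:

```python
# A minimal sketch: convert sentinel codes to NaN and median-impute each
# item before re-linking the file in AMOS. The file name, the -99 code,
# and the choice of median imputation are assumptions.
import numpy as np
import pandas as pd

df = pd.read_csv("survey.csv")
df = df.replace(-99, np.nan)                  # treat sentinel codes as missing
df = df.fillna(df.median(numeric_only=True))  # impute each item's median
df.to_csv("survey_imputed.csv", index=False)
```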
@@Gaskination Thanks, I now found the missing data in SPSS and it worked out just fine :)
This video helped me a great deal with writing my master's thesis! Thank you!
Emanuele Fino If they still demonstrate sufficient discriminant validity, then just move forward. Otherwise, you might want to consider making them a 2nd order factor (reflective). This will probably yield better results than a single factor with all items.
Thank you!
James Gaskin, what if the data are not good on any criterion: CMIN/DF = 10, RMSEA > 0.08, and the IFI, TLI, and CFI not reaching 0.9, and AGFI and GFI not reaching 0.9? My results fail all criteria. What can I do?
You can try this. The argument against it is that you need to definitively establish evidence that your IVs are not the same as your DVs (i.e., that there is sufficient discriminant validity) or else you are running up against an issue of tautological correlations (e.g., age predicts experience).
Yes, they will be different. We would test measurement hypotheses (which are uncommon) by examining convergent and discriminant validity and reliability, as well as model fit. We test structural hypotheses (much more common) by developing a causal model and examining the regression weights. Hope this helps. I have videos about this stuff too.
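To make the distinction concrete, here is a minimal sketch of a structural model in the Python semopy package; the construct names, items, and data file are hypothetical:

```python
# A minimal sketch: a measurement model plus one structural (causal) path.
# Structural hypotheses are tested via the regression weight on the ~ line.
# Construct/item names and the data file are hypothetical.
import pandas as pd
from semopy import Model

desc = """
# measurement model
Satisfaction =~ sat1 + sat2 + sat3
Loyalty      =~ loy1 + loy2 + loy3
# structural model (the hypothesis: satisfaction drives loyalty)
Loyalty ~ Satisfaction
"""

data = pd.read_csv("survey.csv")
model = Model(desc)
model.fit(data)
print(model.inspect())  # parameter estimates with std. errors and p-values
```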
Hello James
Useful video; I have learned to conduct CFA with AMOS within 5-10 minutes. Thank you very much.
Kemal
@vyeniaras
Additionally, you may not want to let go of a certain item because it is crucial to your construct (however, this should not be an issue when using interchangeable items, as should be done for reflective constructs).
Each extraction method and rotation method serves different purposes. Principal components and varimax are the most lenient and easiest for obtaining a solution, but they are also prone to being too soft. I actually prefer Maximum Likelihood and Promax. Maximum Likelihood is strict. Promax assumes correlated factors.
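For instance, in Python's factor_analyzer package that combination would look roughly like this (the factor count and data file are assumptions):

```python
# A minimal sketch: EFA with Maximum Likelihood extraction and promax
# (oblique) rotation. The number of factors and the data file are assumptions.
import pandas as pd
from factor_analyzer import FactorAnalyzer

data = pd.read_csv("survey.csv")  # hypothetical file of Likert items only

fa = FactorAnalyzer(n_factors=5, method="ml", rotation="promax")
fa.fit(data)

pattern = pd.DataFrame(fa.loadings_, index=data.columns)  # pattern matrix
print(pattern.round(3))
```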
Hi James, is it acceptable when the TLI and NFI are close to 0.9, for example, 0.87?
All values in my analysis meet the goodness of fit but the TLI and NFI are not greater than 0.9.
Should I remove some more items so that they will be greater than 0.9? Or is it still acceptable if the values are close to 0.9?
Thanks again James
I never report TLI and NFI (I usually just use CFI, RMSEA/PClose, and SRMR). Model fit has many measures. If there is enough evidence that model fit is good, then a couple metrics suggesting it is borderline is probably not a problem.
James Gaskin thanks, you saved my life
Thank you so much for taking the time to post this! You may have literally saved my manuscript! I don't know why they don't teach us this in grad school or at least sell a good book on it.
+surfergirl0519 Without any offense to the great work James has done by uploading this video, there is a book by Hair et al. named "Multivariate Data Analysis". It is fantastic!
Hi James, I am still a bit confused about Modification index. If the Model fit is good and there are some MIs having large values, should I just leave the MI or should I covary the errors? Thank you
I would recommend avoiding covarying errors if at all possible. So, if model fit is already good, then don't covary the errors.
@@Gaskination Thank you :)
This may also be due to sample size. Large sample sizes artificially inflate the Chi-square. So, lower thresholds are acceptable for higher sample sizes. A "large" sample size is greater than 250. This is according to Hair et al 2010 "multivariate data analysis" book.
You just have to "estimate means and intercepts" in the AMOS estimation tab. This will account for blank values. If you have replaced your blanks with a number, like "-99", AMOS won't understand that. You will have to delete those values.
Hi James, thanks for all your helpful comments. If I want to compute factor scores for each individual participant/factor from the CFA model, how would I do it? While doing EFA, SPSS would allow you to save the factor scores (using regression, Bartlett, and Anderson-Rubin), but AMOS won't. I found a blog that suggests standardizing the original scores, multiplying by the factor loadings, and summing them up. Is this a correct approach? If yes, do you know of any literature that I can cite for this approach?
TIA
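For what it's worth, the weighted-sum approach described in the comment above can be sketched in a few lines. Note this is only a rough approximation of model-based factor scores, and the item names and loading values here are hypothetical:

```python
# A minimal sketch of the weighted-sum idea: standardize each item,
# weight it by its standardized CFA loading, and sum per respondent.
# This approximates, but does not equal, SPSS's regression-method scores.
# Item names, loadings, and the data file are hypothetical.
import pandas as pd

df = pd.read_csv("survey.csv")
loadings = pd.Series({"x1": 0.82, "x2": 0.75, "x3": 0.71})  # from the CFA output

items = df[loadings.index]
z = (items - items.mean()) / items.std()      # standardize the raw scores
df["F1_score"] = (z * loadings).sum(axis=1)   # weighted sum per participant
```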
Oh, thanks for taking the time to reply to my question. I figured it out by myself: there were some empty sections where the respondents didn't fill things in properly. ^^ Thanks, now I can finish my thesis. This video helped me a lot.
1. Not recommended, but justified if the items have very similar wording or intent.
2. Not unless the regression weight is very small. Look at the p-value. If it is significant, you can keep it.
3. I would not worry about standardized residual covariances unless absolutely necessary. And then, just do them one at a time, starting with the highest (absolute value). My Excel tool actually has a tab for identifying the most problematic item using the standardized residual covariances.
These three can all do it. There are others, but these are the most popular. Stay tuned for a cool new video about how to automatically produce a CFA model in AMOS using the pattern matrix from the EFA. Hopefully I'll create this video this week.
Likely this is because you have a factor with only two indicators. Is this the case? If so, does one of them have a standardized regression weight greater than 1.00? If so, you may need to constrain that regression weight to be equal to the regression weight of the other indicator by typing "a" (without quotes) in the regression weight box for that line, and then constraining the variance of the latent factor to equal 1. Hope this helps!
PLS was not meant for model fit because model fit is based on the covariance matrix, but PLS does not rely on the covariance matrix.
If you are aiming to confirm your factor structure, then I would run a bootstrap to determine the significance of the path weights/loadings for each item. However, if you ran a successful EFA, then you should have no problem conducting a CFA in AMOS using reflective factors. Then you could also assess model fit.
Rely on the pattern matrix. You might also try the Maximum Likelihood approach (instead of Principal Components Analysis or Principal Axis Factoring) because this is the algorithm that AMOS uses during the CFA.
Those decisions are driven by logic and theory. You should end up with as many factors as you intended during your theory development. Formative vs. reflective is determined by the relationship of the items within a factor (see Jarvis et al 2003 on specification). Second order is something we sometimes do when we have broader scope constructs (often formative - see Straub et al in MISQ I believe, not sure what year). Hope this helps.
Sounds like you already have sources to support your model fit. You don't need my additional confirmation. For thresholds I've put together, refer to my wiki: statwiki.kolobkreations.com
My apologies, for some reason I thought you were in the EFA stage. During the CFA, if you have this problem, it is usually because you only have two items on a factor (if that is the case, then try to constrain those two items' regression weights to the same thing, like "fixed"). If that is not the case, then you might look at the loadings to see if one loading is in the opposite direction of the others. If so, then either constrain it or remove it. I hope this helps.
As long as you have a minimum of three items on each factor, this is fine. Also, it depends a bit on if the scales come from published literature. If so, you may want to just check to see if others have also had problems with those items.
It is somewhat subjective. You want to keep all you can and still have good validities. The numeric cutoff would be an average loading of 0.70. That means you could have one at 0.90 and another at 0.50, as long as they average out to 0.70 or above.
Hi, Dr. Gaskin. First of all, thank you for your informative videos; they are very helpful.
I am trying to assess model fit in AMOS. I have 6 factors (it's Holland's RIASEC theory). Using SPSS, I calculated the Cronbach's alpha for each factor, and they were all above .80. So I thought that in the CFA, the fit indices should also be high. But they were not: TLI = .592, CFI = .612, and RMSEA = .068. I saw the suggestion about modification indices; however, I am not sure how and why they work. I watched your videos, but I did not see one where you go back and modify the model. If you can help, I will be extremely grateful. Thank you.
The modification indices identify correlations in the data that you have not accounted for. If you account for the large ones, they will improve your model fit (by reducing the chi-square).
It is because you are estimating means and intercepts (which you are probably doing because you have missing data). So, you'll need to impute the missing data in SPSS first, then save it and re-link it in AMOS, then run it again with the "estimate means and intercepts" box unchecked.
No, but there are some forum discussions about it on the SmartPLS website.
@ecmlau
I don't understand what you mean by reducing the model, unless you mean to trim off the items that are not correlating very well with others. I don't have links for that, other than this video. But the general rule is that you want the average standardized loadings from items within a latent factor to be higher than the correlations between factors.
Hi Mr Gaskin!
Thank you for all the lessons you provide through online media.
This time I want to ask how to install estimands in AMOS version 23, especially the VB estimand for specific indirect effects.
After I installed it and followed the steps given in your tutorial, the estimands didn't appear in the plugins menu.
Is there a problem with that? Please guide me.
Estimands do not appear in the plugins menu. You have to access them by clicking on the bottom left where it says "not estimating any user-defined estimands"
@@Gaskination thank you Mr Gaskin,
After I tried again according to the guidelines and suggestions from you, the results are available.
The output is also available once applied.
What you have developed is very helpful in the research process.
I hope you continue this work.
Thank you, and stay healthy!
Honestly, those values do not sound so bad. Mardia's c.r. is a bit strict. You can try the transformation to see if it helps. I cannot remember which transformation is for which issue, and I don't have my books handy to investigate. If you look at Hair et al. 2010, "Multivariate Data Analysis", that book has a chapter or section on which transformation to use and when.
Yes. You don't have a variable in your dataset called "nature of work". So, it can't be included as an observed variable (a rectangle). Hope this helps.
There must either be some missing data, or you have blank rows at the bottom of your dataset. Otherwise you could uncheck estimate means and intercepts. I would not remove items with loadings greater than 0.600. The loadings just need to average out above 0.700 for each factor.
RMSEA is a parsimony-adjusted measure, so it can be improved by making a parsimonious model. This is not always doable in the measurement model. To do it in the structural model, you can remove paths that are not significant (and make sure you don't exclude paths that would be significant - as indicated by the modification indices). Modification indices will not tell you what to exclude, only what to include.
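The parsimony adjustment is visible in the usual sample formula (standard notation):

```latex
\mathrm{RMSEA} = \sqrt{\frac{\max\!\left(\chi^2 - df,\ 0\right)}{df\,(N - 1)}}
```

Since df appears in the denominator (and is subtracted in the numerator), dropping paths that add little to the chi-square frees degrees of freedom and pulls RMSEA down; that is the sense in which a more parsimonious model improves it.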
If you have good fit, then you can move forward. Don't worry about the high residuals. I almost never deal with high residuals unless it is a last resort.
Yes. that is fine as long as the standardized loadings are not outside of +/- 1.0.
I also get these messages:
1. An error occurred while checking for missing data in the group, Group 1.
2. An error occurred while attempting to fit the saturated model for the group, Group number 1. It will not be possible to compute the chi-squared statistic.
3. An error occurred while attempting to fit the independence model for the group, Group number 1. It will not be possible to compute fit measures that require a model.
It is more likely the high correlation. Try to constrain the regression weights of the paths between the first- and second-order factors to be equal (name them all "fixed" or "Jojo" or something, but name them all the same thing). This is a band-aid. To really fix it, examine the standard errors to see where the real problem is coming from. I have a video about this: "Iteration Limit Reached in AMOS". Hope this helps.
I'm not sure I understand the question. If you want to see how few items you can use to still result in a reliable latent construct, then you could use the new video I created to show how to improve reliability. But I'm not sure that was your question.
I've got some hiccups running a CFA on a validated scale.
1) I get a not-positive-definite covariance matrix on a CFA with 4 factors (each having 3-6 indicators, for a total of 21). I believe it's because of unusually high correlations between all indicators, across factors.
2) I had 200+ df, so I correlated all the error terms for those items with significant correlations. Now I get an unidentified model. AMOS points to one factor (and its three items) as the "probably unidentified parameters".
The most likely reason is because the model didn't fully minimize or converge. The iteration limit was reached or something like that. Check out my video on reaching the iteration limit. Hopefully that will help.
Hi Sabin Khadka, YouTube wouldn't let me respond directly, so I'll respond here. You can simply constrain the error variance to a small positive number (like 0.05). This might fix it. If not, it is indicative of other underlying issues. You might remove an error covariance if you have one. Or you might identify whether two items are too similar.
The key point is that "Minimization was unsuccessful". This means that AMOS couldn't find a good solution. I highly recommend watching my video called, "iteration limit reached in AMOS". If you still have trouble with it after applying what you learn in that video, feel free to email me. My email is listed on the channel home page and on my wiki. Hope this helps!
Usually the theoretical model includes regression lines between latent constructs (unless you are theorizing around a measurement model).
I have always had that box checked. Actually, under Analysis Properties I check Minimization history, Standardized estimates, Residual moments, and Modification indices.
Great video James, very useful. I have a question. I know AMOS does not provide the Satorra-Bentler chi-square for non-normal data, but instead offers bootstrapping. Can you make a similar video highlighting bootstrapping in CFA with AMOS?
I have a video on bootstrapping for mediation, but not with CFA. I'll see what I can do. I'll add that to my list of requested videos.
Thank you James, looking forward ....
This means that you do not have variables in the dataset you linked to AMOS that match the names you used in the boxes. One way to avoid this is to simply drag the variable names into the boxes, or just use the plugin I provide on my wiki. It automatically creates the CFA model from the pattern matrix in the EFA.
Thank you, this is really helpful.
I have a model in knowledge management: it has 84 observed variables --> 4 intermediate variables --> 1 latent variable (my research concept). Is AMOS suitable? My model looks like a cumbersome ball of cotton wool!!!
AMOS should be fine.
Thanks so much for your great help. After reviewing all the steps, I think the problem might lie in the data entry step. I usually use four to five Likert scale questions for an item, and three to four items to calculate a variable. However, my only way is to use the "mean" to calculate all items, which then go into the squares in the AMOS model. Could you please help me with instructions on this issue if possible? You and the Internet are my only teachers. Thanks in advance.
Like I said, you need to do a validity analysis, as described in my other video (referenced above). My opinion is irrelevant. There are certain thresholds that should be met. If you meet them, then you have a statistically solid model. If you don't meet them, then you need to adjust your model (perhaps through model fit) to remedy issues.