Great video again, thanks for putting it on YouTube. Regarding the validity and reliability test, the new Stats Tools Package shows completely different CR values (all of them now less than 0.5, even!). What do you think is the problem? And can I just go back to your old Stats Tools Package and report the CR from there (which are all above 0.7)? As for CMB, the old method (with CLF and without CLF, checking the delta in standardized estimates) showed no CMB problem, while with your new method the p-value is very significant. Do you think it is better to keep the CLF? Does it make a difference in the path analysis afterward? Thanks ahead
+Riad Cheikh Regarding CR, this new method is more accurate, as it accounts for the variance of the errors. It is unfortunate that it has decreased your CRs so much. As for the CMB, the old approach had no literature support, whereas this new approach is supported by the literature. In this case, I would recommend retaining the CLF. The estimates will be different with and without it.
+James Gaskin Thanks James for your reply. It's unfortunate that the new approach made my CR drop so badly (my new CR is even less than 0.5), so in your opinion, how can I deal with it? Is there any way to justify the old CR numbers? (I used Cronbach's alpha between items and the values were close to my old CR values.) I am now writing my findings chapter and had all my analysis done (I even used the old CMB and CR approaches to compare between the groups). Do I have to update all my analyses? I hope it's not too much to ask. I appreciate your patience with me and all your help.
+Riad Cheikh I would recommend for the thesis to just retain all the old analyses. They were the best we had until this past weekend. This is often the way of things in science -- everything looks fine until we look back.
Sir, I have a question regarding the Heywood case. The whole model is good, but I am getting a Heywood case in one latent variable for the female group during the invariance test. One solution is to fix the regression weight of that variable and then re-run the solution. If I adopt this strategy, should I apply it to the whole invariance analysis (both gender groups) or just the female group? Any good reference?
You could constrain it to be equal, per this video: th-cam.com/video/Vx24KFf-rAo/w-d-xo.html As for a reference, probably the Byrne AMOS book would be fine.
Hi James! Congratulations on your excellent video. At 11:01 you say that correlations between the factors (indicating discriminant validity) should be less than about .8. Is this a rule of thumb for standardized or unstandardized coefficients? And what do you suggest in case a value above .8 occurs?
Thank you so much for your help James! I am a newbie. I notice you mention in your video that you have a plugin for the pattern matrix, which appears to be intended for the Maximum Likelihood extraction method. However, I did a PCA and have a component matrix; will the plugin that transfers it from SPSS to SPSS AMOS still work?
Yes, it should work, as long as the variable names (rather than labels) are shown in the rotated component matrix, and assuming you use decimal points instead of commas as the decimal separator (e.g., 0.789 instead of 0,789).
Hi James, I am having a problem when I do the validity test. I get a message that says: Run-time error '91': Object variable or With block variable not set. The same Excel sheet works completely fine with my old data. Any suggestions? Thanks
It worked. I used one of your comments given below: Make sure to comply with the caveats and assumptions listed in the box below the button. If you are not violating one of these, also make sure you have covaried all latent variables with all other latent variables. If this is not the problem, then perhaps you have a "latent" variable that only has one item - which means it is not really latent and should not be in the CFA. Thanks, by the way.
Hi James, Thank you for the very comprehensive video. I have a question to ask: I have followed what you have done in your Exploratory Factor Analysis in SPSS, but I do not know how to carry it over to the SmartPLS software. Is it possible to help me with this issue? And what if the number of components is less than the number of variables in my conceptual framework; what should I do? My research is focusing on Work-Life Balance, and it has several constructs which are used to explain WLB.
1. If you mean to use the EFA in SmartPLS, then you just have to recreate the measurement model manually. 2. You can constrain the EFA to extract an exact number of factors (rather than using eigenvalues as the criteria). Or you can just run the whole model in SmartPLS and skip SPSS.
Hi James, thank you for the extremely helpful video!! Just to make sure, the dataset you use includes "different factors extracted from one measurement", e.g., playfulness, usefulness, and atypical are different dimensions of one measurement, right? If I am measuring three independent variables (e.g., self-efficacy, satisfaction, support) using three single-dimension measurements, each with 5 to 10 items, and want to check their validity, should I do three separate CFAs for them? Hope I made it clear... Thank you!!
Hello James, Thank you so much for these wonderful videos. I have done CFA for my latent factors (5 independents and 1 dependent) with a sample consisting of 137 respondents. My final model fit indices were (CMIN/DF 1.513, CFI .952, GFI .819, AGFI .772, NFI .873, IFI .953, TLI .945, RMSEA .062). Do these indices indicate goodness of fit for my model or not? Note: when I look at the modification indices, they look fine, and the largest value, 11.704, belongs to my latent dependent factor (not one of its indicator variables).
Thank you James. Please, I have another question. When I download your sheet which calculates composite reliability and AVE, after inserting my tables and clicking the red button, it does not work and I get the following message: (unrecognized project language). What does it mean, and how can I make it work?
I've never seen that error. Make sure you are complying with the caveats and assumptions listed below the button. Or just use the plugin: th-cam.com/video/JqySgMU_qMQ/w-d-xo.html
My groups are different by sex, so I started to delete the items with the largest deltas. Alas, every time I ran the analysis I got the same result, and I continued deleting items until all the items of a variable were gone. At that point it was clear that deleting items was not the way to reach invariant groups. Is there another method to reach invariant groups? In fact, I deleted a lot of items and the chi-square test remains significant.
Dang. That's unfortunate. To double-check the issue, you might try a Levene's homogeneity of variance test (th-cam.com/video/E5VIKZU5kB8/w-d-xo.html) where gender is the factoring variable. If this also shows that the groups are variant across items, then you will just have to test your model without the grouping variable (i.e., no moderating by gender). If you absolutely must moderate by gender, then you'll have to list non-invariance as a limitation.
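For anyone who wants to run that check outside of the SPSS menus, here is a minimal Python sketch of a Levene's test for one item across gender groups (the file and column names are hypothetical placeholders, not from the video's dataset):

    import pandas as pd
    from scipy.stats import levene

    # Hypothetical file and column names -- substitute your own survey data.
    df = pd.read_csv("survey.csv")
    male = df.loc[df["gender"] == 1, "jsat1"].dropna()
    female = df.loc[df["gender"] == 2, "jsat1"].dropna()

    stat, p = levene(male, female)   # Levene's test for equality of variances
    print(f"Levene W = {stat:.3f}, p = {p:.4f}")  # p < .05 suggests unequal variances across groups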
Respected Sir, while running the structural model, should we use the measurement model's remaining items (after model fit), or use the complete set of items from the questionnaire? Please guide.
For the structural model, only use the items that remain after validating the factors. So, if an item was deleted during the measurement model validation, then it should remain deleted during structural model estimation.
My issue is that I cannot run the analysis when I have modification indices selected. It keeps telling me I have an error and I need to remove that selection; then it will run. I am also getting a not very appealing model fit even when I keep following the procedure. What does that mean? I have a sample size of 200.
Modification indices cannot be checked when you are also estimating means and intercepts (a check box in the estimation tab). So, uncheck that box unless you have missing data. If you have missing data, then you must estimate means and intercepts. If you cannot check modification indices, then look at standardized residual covariances. I think I have a tab for this in the stats tools package excel file on the homepage of statwiki.
Hi Dr. Gaskin, will you please advise on a factor loading greater than 1? The EFA has shown that the loading value for one of the two items is greater than 1, but after running the EFA-to-CFA conversion, the standardized loading shows a value less than 1. Is the initial loading value > 1 still a matter of concern?
Great video, Dr. James. It cleared all the doubts I had, except one. In the case where we have second-order factors, do we perform the metric invariance test prior to adding the second-order factors? And if yes, is it OK to proceed with the structural model later and test the multi-group invariance with the composites including the second-order factor?
Hi James, thanks for your videos, they are the best! I have a dumb question haha; someone told me that you don't need to include the dependent variable when doing CFA. Is that true? Furthermore, for my model fit, all values are excellent except for cmin/df (it is a little over 14, sample size is over 8500). What do you recommend concerning that? Thanks, keep up the good work! Cheers from Amsterdam
+Polyvore That is a common myth. However, the DV must be included so as to confirm you do not have a tautological relationship between the IVs and DV. You must establish discriminant validity before you can test causal effects. The only exception is when your DV is not a reflective latent factor. As for cmin/df, I am not surprised it is that high, given your sample size. The cmin is inflated by sample size. If you were to run the same model with a random sample of 300, instead of all 8500, you would probably find a suitable cmin/df.
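If you want to try that, a minimal Python sketch for drawing a reproducible random subsample looks like this (the file names are hypothetical placeholders):

    import pandas as pd

    # Hypothetical file name -- the point is just a reproducible random subsample.
    df = pd.read_csv("full_dataset.csv")           # e.g., the file with all 8500 cases

    subsample = df.sample(n=300, random_state=42)  # random 300 cases
    subsample.to_csv("subsample_300.csv", index=False)
    # Point AMOS at subsample_300.csv and re-run the same model to see whether cmin/df settles down.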
Hi James, sorry for another question, but what about if you have three different DVs (e.g., acquisition satisfaction, consumption satisfaction, possession satisfaction) that you'd like to show have discriminant validity (they do, yay!), but you'd like to insert them into a causal model one at a time (to tell a simplified story)? Would it be OK to use all three in the CFA, and then just use one at a time for the causal model? Or do I need to run three separate CFAs? I'd prefer not to run three separate CFAs, as it still seems important to show reliability and validity for all three DVs collectively. Thank you!
The way you propose is fine, for the reasons you mention. You need to show they are different, but then test them separately. However, reviewers may say that you are hiding suppression effects (which can appear when similar variables are included together in the same model). So, I would recommend also doing a model with all three in it. Stick this in the appendix to show how the three work when included together.
Hello professor. Thank you for the video. If a DV is an observed variable, which is a dollar amount, and all the other IVs are latent variables, is it correct to just draw a rectangle in the model instead of an ellipse? I did include this rectangle, the observed variable, as part of the model, but it seems that the CFA does not include this factor when testing model fit. I am not sure whether this is the right way to do it. Thanks!!
Oh, I should not have included the observed variable (DV) in the CFA. I will re-run the CFA and include it in the SEM analysis. Thank you for the reply!
Modification indices only appear if you check the box for them in the output tab of the analysis properties window. You must also not have any missing data, and you cannot be estimating means and intercepts (another checkbox in the analysis properties window).
@@Gaskination Thank you for your responsiveness, but the problem persists. The modification indices do not appear even though I have checked my data (no missing values), so I checked the 'Estimate means and intercepts' box.
@@renenob not that I'm aware of. It is a practice that most engage in because they don't have excess sample to draw from. So, unless the reviewers are asking for justification, I wouldn't bring attention to it. It is common practice.
Hi Dr. Gaskin, I am using AMOS 21. I tried testing common method bias using the method you shared. While running the fully constrained model after adding a CLF (using a letter label like 'aa' on all paths rather than '0'), I got the output, but while running the unconstrained model I didn't get any output and instead got the following message: "The following covariance matrix is not positive definite". How do I get through from here? Kindly comment.
Dear Professor Gaskin, I checked that and it shows me the following note, which is not of much help. If you can throw some light on this problem I would really appreciate: "The following covariance matrix is not positive definite : Amos can produce estimates of variances and covariances that yield covariance matrices that are not positive definite (Wothke, 1993). Such a solution is said to be inadmissible. Amos does not attempt to distinguish between a solution that is outside the admissible region and one that is on or near its boundary. Wothke, W. (1993). Nonpositive definite matrices in structural modeling. In Bollen, K.A. & Long, J.S. [Eds.], Testing structural equation models (pp. 256-293). Newbury Park, CA: Sage. For more, see the discussion of the message: "This solution is not admissible". "
@@MrDhunna8 Does it say anything about a negative error variance? Usually that's the problem. You're welcome to email me your .amw file and data file and I can check it. You can find my email by googling me.
No professor Gaskin. It doesn't say specifically about a negative error variance. Rest I will email you the file and you can share your understanding over there. Thanks for your help!
Hi James, thank you so much for the SEM Series, it is so helpful and you should be proud of it :) I have installed SPSS and AMOS on my computer and went through your SEM Series from the first one up to here, but currently I am getting an error whenever I run the model in AMOS. The error is "The model is probably unidentified. In order to achieve identifiability, it will probably be necessary to impose 1 additional constraint". Even though I am using SPSS v. 23, I couldn't make your plugin work on my computer, so I have built the model manually. I wanted to mention this in case I am missing a step that I don't know about. Thanks in advance; if you need further information, please let me know. Best, Merve
This error occurs when you are missing a path constraint on a latent variable. Every latent variable needs to have at least one indicator path constrained to 1. As for the plugin not working, it might be because of one of the following: 1. You have variable labels instead of variable names in the pattern matrix 2. You use commas instead of decimals 3. You are using a Mac
Thanks James for the very informative videos. I just have a question: is it necessary to do EFA if I'm using pre-developed scales with established validity, or can I jump right to CFA? Thank you again
I always do EFA because it helps to identify the discriminant validity issues. However, some would say that EFA is not necessary if the scales are already established.
Hi James, just a question: what if we have hypothesized that the two groups are different and the metric invariance result shows a difference? Do we still have to assess the difference between the two groups and delete the item with the biggest difference, or is that only applicable when you hypothesized that the two groups are the same?
Usually a multigroup theory is about the causal relationships between variables, rather than the measurement of variables in latent factors. Metric invariance is about measurement differences, rather than causal differences.
Hello sir, I have one latent variable which has 5 indicator variables containing 25 items, which means each indicator consists of 5 items; the same goes for my second latent variable. But my third latent variable does not have sub-indicators and instead contains 10 items directly. I am now confused about how to perform the CFA, and about the terms reflective, formative, first-order, and second-order constructs. Please explain.
You can do the CFA by just correlating all of the highest-order factors. In some cases, this will be first order, but in others, it will be second order. Notice in this video how I do it with 2nd order: th-cam.com/video/HBQPqj63Y7s/w-d-xo.html
@@Gaskination I have 15 items in one latent construct, but in the Hair book, he suggests that more items are not necessarily better; he recommends that at least three or four items per construct is sufficient. My question is how to perform CFA when we have 15 items in one construct. My other question: how is it possible to choose a specific construct which has only 3 items? What if our model demands more items in one construct? Also, how do we perform CFA when we have a multidimensional measure? Suppose service quality has five dimensions, like empathy, reliability, and responsiveness. Each dimension has 5 items, which means 15 measured items. Another question: if we use multidimensional measures, how do we check construct validity: construct validity for Service Quality with its sub-dimensions, or for the sub-dimensions with their measures? Please explain.
@@hemakumarm9662 I would recommend you check the dimensionality of your factor by conducting an EFA first. This may show that your 15 items actually factor into multiple dimensions. Then you can model this as a second order factor in your CFA. Here is a video on second order factors in AMOS: th-cam.com/video/HBQPqj63Y7s/w-d-xo.html. If modeling the 2nd order factor reflectively, then the same validation criteria applies as for first order reflective factors. If modeling formatively, then you'll need to use SmartPLS (or some other PLS tool) because AMOS doesn't do formative factors.
Thanks for the CFA videos. They're really helpful. I've been looking around and I can't seem to find one thing though. Is there any way in AMOS to force the correlation to be 1 so I can compare a model where I'm forcing two variables to be perfectly correlated to another model where I'm letting it be free?
Yes. In the model, double-click on the covariance arrow. This will open the Object Properties window. Go to the Parameters tab. Then set the covariance to 1.
Oops, I meant to say covariance. Thanks for the input. However, I'm trying to force the covariance to be 1 for the standardized estimates. Doing what you suggested just makes it 1 for the unstandardized estimates. Do you know how to force the covariance to be 1 for the standardized estimates? I want to compare two models, where one has two unobserved variables and the other makes these two unobserved variables the same thing by forcing the covariance to be 1. I would just have all the observed variables have paths to one unobserved variable, but I need the degrees of freedom to be equal and this would nullify that. Thanks
It is the same. The unstandardized value will be 1, which is constrained. That means the correlation is also constrained (although the value will differ from 1).
Dear Prof. Gaskin, I see that on your website you have moved all the plugins into a single Dropbox folder. However, I can't seem to locate the EFA-CFA plugin within that folder. Your help would be highly appreciated. Thank you very much in advance.
Hi James, I am using Windows 10. Is there a plugin available? I am pretty sure I put the plugins in the right place, but they are not showing up when I open AMOS...
Make sure you have the correct plugin (version 23 or lower, or version 24+). Also, make sure you have checked the box for 'unblock' in the file properties (if it is an option). Also make sure you are running amos as administrator (right click amos graphics exe file and select 'run as administrator'). It should work if all these things are done. If not, then it may be some other issue I haven't discovered yet.
Mr Gaskin, first of all, thank you for the helpful videos. I would like to ask you where I can see the residual variances of the variables in the AMOS output? Thank you so much for your time!
Sorry for the delayed response. YouTube classified this as spam for some reason... The residual variances are in the variances table in the estimates-->scalars section of the output.
Hi, I have a question. I found that my model is not invariant. However, do I need to fix the non-invariance problem if my research aims to test the moderating effect of gender? Thank you :)
If you want to do a multigroup test during the structural model, then you need to meet the assumption of measurement invariance during the measurement model. If you cannot meet full invariance, try for partial invariance (at least 1 indicator path invariant per factor).
Hello James, thank you for your explanations! I have a question regarding the pattern matrix builder plugin. I have the AMOS v24 trial version and the plugin folder looks different from yours. Where do I have to install the plugin to make it work?
You can do it in pairs. The human mind is not good at comparing more than two things at once. Similarly, our algorithms are best at comparing in pairs. So, if you have three groups, you can do A:B, A:C, B:C or A:BC, B:AC, C:AB.
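To make the pairing concrete, here is a tiny Python sketch that simply enumerates the pairwise comparisons for three hypothetical group labels:

    from itertools import combinations

    groups = ["A", "B", "C"]               # hypothetical group labels
    for g1, g2 in combinations(groups, 2):
        print(f"compare {g1} vs {g2}")     # prints A vs B, A vs C, B vs C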
Hi Prof. James, thank you so much for the SEM Series, it is so helpful! I have a question. You show how to do Configural Invariance & Metric Invariance. Then, how can we do Scalar Invariance? Thank you very much!
Scalar invariance involves constraining the intercepts. I don't have a video on it, but the procedure is the same as with constraining regression weights.
Hi James! When I checked for metric invariance I obtained a p-value = 1.00. Suspicious, I computed the difference between the standardized regression weights and found a difference of .009. So does this mean your tool package rounds up/down? And if so, is it more accurate to report my p-value as .991? (assuming I did this correctly) Thank you.
If you are finding differences in regression weights between groups, this implies it is set up correctly and there is not likely an error (unless you accidentally reported the wrong chi-square or df). The chi-square difference test in the stats tools package does round to three decimal places. You can always extend that in Excel to see what the actual value is.
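To see the unrounded p-value, the same chi-square difference test can be reproduced in a few lines of Python (the chi-square and df values below are made-up placeholders, not from any real model):

    from scipy.stats import chi2

    # Hypothetical chi-square and df values for the unconstrained and constrained models.
    chisq_unconstrained, df_unconstrained = 412.3, 180
    chisq_constrained, df_constrained = 425.1, 192

    delta_chisq = chisq_constrained - chisq_unconstrained
    delta_df = df_constrained - df_unconstrained
    p_value = chi2.sf(delta_chisq, delta_df)   # survival function = 1 - CDF

    print(f"delta chi-square = {delta_chisq:.3f}, delta df = {delta_df}, p = {p_value:.6f}")
    # A non-significant p (> .05) supports metric invariance (the constrained model is not significantly worse).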
Thanks, got it. Is it expected of me to just report the P as 1.00 even though the difference is .00899.. or subtract that number from 1 and report that?
Hello sir, your videos are really good! Do you have any source for interpreting factor loadings in second-order confirmatory factor analysis? I mean, what does a high factor loading mean?
Dear Sir, when we check model fit, if we get a p-value less than 0.05, we do not say it is not a good model fit, because the p-value is affected by the sample size. Is there any literature supporting this point? Can you please comment on this?
The p-value for the chi-square is a test for whether the proposed model (your model) is different from the observed model (the model inherent in the data and variable relationships). So, a significant (< 0.05) p-value means your proposed model differs significantly from what the data imply, which is why a non-significant chi-square is preferred. However, the chi-square is inflated by sample size, so with large samples we rely more on other fit indices (like CFI, RMSEA, and SRMR) than on the chi-square p-value.
Dear Prof Gaskin, when I run my CFA, the model fit indices are adequate; however, the covariances between the factors are negative. I have tried to read about this but could not find any material on it. Please advise whether I can go ahead with the next steps of the CFA or whether my model is inappropriate. Thanks!
Negative covariance is totally fine. It just means there is an inverse relationship between the two factors. For example, if I have a factor for job satisfaction and a factor for burnout, these would have an inverse relationship because as burnout increases, job satisfaction decreases.
AMOS requires that you either impute the missing values (for example, with the mean or median value) or delete those rows. This is only a requirement if you need to do modification indices or bootstrapping.
@@nasirislam2390 Yes, there are different thresholds. Above .5 makes sense for constructs that should be related. Less than 0.8 makes sense if we want them to be discriminant.
Gratitude, sir. I used the multigroup analysis plugin. However, I am having difficulties with the interpretation. Do you have any research articles that used MGA? Thanks
I've changed the link to include all plugins and estimands. So, just left-click on the link for plugins and estimands, and then it will take you to a folder that contains all of them.
Hello Sir, Thank you for your videos and guidance. I am a PhD candidate and I seek your help to answer the following questions: 1. Is an MI between a uniqueness (error term) and a factor considered an issue in further analysis? If yes, how can it be solved? 2. The GFI value for my model is 0.857; is that acceptable or should it be more than 0.9? FYI, my model contains 62 items, 13 constructs, and the sample size is 418 respondents. Thank you so much.
1. Not sure what you mean. However, generally speaking, we seek to address modification indices by removing and adding paths and variables, rather than by covarying errors with factors (not sure if that answers your question, since I wasn't sure what your question was). 2. Check instead the CFI, RMSEA, and SRMR. GFI is a bit old and can be biased by sample size and model complexity.
When I try to create the diagram using the plugin, I receive an error message saying 'There are no observed variables in the model. That is, none of the variable names in the model specification could be found in the data file.' I have been working alongside your SEM Series so far, and I do not understand what to make of this error message. The variables have names and labels in SPSS, and I made sure SPSS outputs names, not labels. Am I missing something, or will it just not work for me?
That is unusual. The only other reason that would happen is if you linked the wrong data file in AMOS. If that is not the reason either, then I'm not sure...
Hi ElEscobar, did you find the solution? I have the same problem, 'There are no observed variables in the model. That is, none of the variable names in the model specification could be found in the data file.', when developing the pattern matrix. Interestingly, I get the same error message when using the plugin for version 24 (or higher) or version 26 (or higher). I tried everything: renaming variables, adjusting SPSS to names and labels, and reinstalling/matching different software versions (SPSS 24/AMOS 24, SPSS 26/AMOS 26) --> nothing works. Previously developed pattern matrices (SPSS 24/AMOS 24) used to run fine; now they are not working anymore. I tried to develop the same pattern matrix as before, and the only thing I receive is this specific error message. I would kindly like to touch base with you if you found a solution. There is also no specific solution published anywhere on the web. Thank you very much for your help. Best wishes,
Hi sir, I have got a few issues. My sample size is 4000 and I have 5 factors with 10 or 11 variables each. My cmin/df is above 60. I have tried the modification indices, but it is not going down and there are several modification indices. The EFA ran perfectly with no discriminant validity issues. Kindly suggest a way forward. Regards
Ok sir, please do tell... what about the modification indices? There are several. Please direct me to any videos regarding the management of large samples... thank you again. Regards
There will be so many modification indices because the chi-square is inflated (and MIs are a function of chi-square). So, again, I would recommend either taking a random sample of 500, or just ignoring any metrics based on the chi-square. Instead look at the RMSEA, CFI, and SRMR.
Hi James, first of all, thank you for your videos! I have a problem trying to run my model in AMOS; it only says 'Scanning' and then 'Reading data' and never runs my model. What can I do?
Weird. Check the output to see if there are any notes for it. If the output is also not available, then you might check the EFA again. Make sure you have also checked data screening issues.
Hello Sir, Thank you for the great videos with explanations. I am doing my Master's thesis and I have a model which has a single-item construct, measured with a five-point Likert scale, as the DV. So do I need to add it when I do the EFA and CFA? Can I do the EFA and CFA only for the other independent variables? I added it when I did the EFA, but there were no values for it in the pattern matrix, so I think I did something wrong. Please help me with this issue. :)
Gauri Prabhani it does not belong in the EFA or CFA. You should only include latent constructs in those analyses. Because this has only one indicator, it is observed rather than latent. You can bring it in just during the causal model or structural analysis.
I have 7 factors and the last factor has two items... When I click Calculate Estimates, the standardized estimates are not appearing for the last factor, which has two items. Also, if the standardized regression weight comes out greater than 1 for any item, is that problematic?
that is unexpected. If estimates are produced for any of the items, then they should be produced for all. As for standardized estimates greater than 1.00, yes, that is a problem. Here is a video about it: th-cam.com/video/Vx24KFf-rAo/w-d-xo.html
Hi James, thanks for your reply... the second problem is solved through the video, but for the last two items the unstandardized estimates are coming out, while the standardized estimates are still not appearing.
Make sure to comply with the caveats and assumptions listed in the box below the button. If you are not violating one of these, also make sure you have covaried all latent variables with all other latent variables. If this is not the problem, then perhaps you have a "latent" variable that only has one item - which means it is not really latent and should not be in the CFA.
Hello sir, I hope you are fine. 1) Can we do EFA and CFA on the same sample of data? 2) Or do we divide our data in half, using one half for EFA and the other half for CFA? 3) Or, after the EFA, do we collect new data from the same population for the CFA?
1. Yes, although it is considered more rigorous to do it on separate samples if you have sufficient data. Most do not have sufficient data, so they use the same data for both. 2. Only if you have enough data (usually more than 400). 3. If possible, that would be awesome.
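If you do have enough data, a random split-half is only a few lines of Python (the file names are hypothetical placeholders):

    import pandas as pd

    # Hypothetical file name; a simple random split-half for separate EFA and CFA samples.
    df = pd.read_csv("survey.csv")

    efa_half = df.sample(frac=0.5, random_state=1)   # random half for the EFA
    cfa_half = df.drop(efa_half.index)               # remaining half for the CFA

    efa_half.to_csv("efa_sample.csv", index=False)
    cfa_half.to_csv("cfa_sample.csv", index=False)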
Dear James, first, thank you so much for these wonderful videos. I have done the CFA and all my model fit indicators are fine except the standardized RMR, which is 0.0942. How can I bring it below the 0.08 threshold? My sample size is 400. Waiting for your reply. Best regards, Syed
You can try looking at the standardized residual covariances to see if there is an item that is causing most of the error. I think I show how to do that in this one: th-cam.com/video/JkZGWUUjdLg/w-d-xo.html
@@Gaskination Dear Sir, thank you so much for the wonderful videos and comments/answers. I have one question: is it better to drop the item or to covary the errors? (Reference/source please.) I have read somewhere that covarying the error terms is only allowed in longitudinal studies. What is the better option? Secondly, for the threshold for modification indices you entered 20. Can we enter 15 or 10? (Reference/source please.)
@@drmuhammadwahab I'm terrible with citations... It is better to drop an item if you have multiple superfluous reflective items (they are intentionally redundant). However, if removing the item would lead to having 3 or fewer items, it may be better to covary the error term. As for modification indices, the size of these is based on the chi-square size. The chi-square size is based on model complexity. So, for a complex model, it doesn't make any sense to keep the original threshold of 4. For a simple model, it doesn't make any sense to set the threshold so high.
Respected sir, can you please tell me how the value of the chi-square is calculated in CFA? Can you explain with a simple example? I will be very grateful to you...
It is like subtracting the predicted covariance matrix (representing the proposed model) from the observed covariance matrix (natural to the data). The bigger the difference, the larger the chi-square. Differences come from unestimated parameters (relationships).
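For a simple worked example, here is a rough Python sketch of the standard maximum likelihood discrepancy that this kind of chi-square is based on (the matrices and sample size are made up for illustration, not taken from the video):

    import numpy as np

    def ml_chi_square(S, Sigma, n):
        # Standard ML discrepancy: F = ln|Sigma| - ln|S| + tr(S * inv(Sigma)) - p
        # The model chi-square is then (n - 1) * F.
        p = S.shape[0]
        F = (np.log(np.linalg.det(Sigma)) - np.log(np.linalg.det(S))
             + np.trace(S @ np.linalg.inv(Sigma)) - p)
        return (n - 1) * F

    # Tiny hypothetical example: two observed variables and n = 200 respondents.
    S = np.array([[1.00, 0.45],
                  [0.45, 1.00]])          # observed covariance matrix
    Sigma = np.array([[1.00, 0.30],
                      [0.30, 1.00]])      # covariance matrix implied by the proposed model
    print(round(ml_chi_square(S, Sigma, 200), 2))   # larger mismatch -> larger chi-square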
Dear Sir, sorry to disturb you again. Can you please help me find an article reference for the cutoff of cmin/df (chi-square/df)? I used (Tabachnick & Fidell, 2007, p. 715) but it...
I have a references section on StatWiki: statwiki.kolobkreations.com/index.php?title=References. As for a reference for the cutoff, most model fit references come from Hu and Bentler (1999).
Hi James, thanks for your sharing. I am using AMOS to do CFA and I have encountered some problems. I connected the factors, but there were no statistical estimates for the factors except 1. This is the image I created: ibb.co/m7ofZ8. It would be much appreciated if you could answer my question. Thank you.
Here's a fun pet project I've been working on: udreamed.com/. It is a dream analytics app. Here is the YouTube channel where we post a new video almost three times per week: th-cam.com/channels/iujxblFduQz8V4xHjMzyzQ.html
Also available on iOS: apps.apple.com/us/app/udreamed/id1054428074
And Android: play.google.com/store/apps/details?id=com.unconsciouscognitioninc.unconsciouscognition&hl=en
Check it out! Thanks!
Hi James,
I am writing to thank you for sharing your excellent teaching videos publicly. I learned to perform CFA using AMOS by watching your videos during my Ph.D. I analyzed two datasets and wrote two papers, now both published. I really wanted to write an email to you to invite you as a coauthor because your role in those papers was significant. Keep sharing and enlightening individuals who are all working to make this world a better place.
Cheers!
Saurab
Hello Sir,
I am a research scholar also learning factor analysis and SEM. Can you help me too?
Thanks James for yet another wonderful and very enlightening video. Just to share, and following the suggestions of Barbara Byrne in her book, the metric invariance analysis can also be carried out using the multiple group analysis option. After specifying the groups and ensuring a configural invariance test is passed, click on Analyze, Multiple Group Analysis (you may also select it from the icons on the left of the screen). A message pops up mentioning that some adjustments to the diagram may be made; click OK, and an '8 by 8' matrix showing various constraining possibilities is displayed. Three models are active. Uncheck all except the first model and first option, since we are interested only in the measurement weights. Click OK and your diagram is displayed with the various constraints. No problem, as only the measurement weights are constrained and active. The default model is automatically changed to the unconstrained model while the other models 1, 2, and 3 are created. Double-click on model 3 and delete it, and model 2 appears. Also delete model 2 and model 1 appears, indicating the constraints. Optionally rename model 1 as the constrained model. Both models are now executed with the same diagram. You only need to click on the desired model to get the chi-square scores as well as the DF for the metric invariance. I found this approach also quite useful for other types of invariance analysis, especially in higher-order models.
+Abideen Adewale Excellent! I thought this might also work, but didn't try it. Thanks!
Your videos are so helpful. I appreciate your generosity in sharing your knowledge with us.
Great effort. You made my life easy. I was stuck reading IBM SPSS Statistics by Andy Field. It is so easy to learn SPSS and AMOS through these videos.
I totally agree. Believe it or not, I learned to use SPSS and AMOS through YouTube more than from any book. And today I have a PhD and publications :)
If commas turn out to be an issue, just run "set locale = english" via the syntax and it will give you dots instead of commas. By the way, the dots made it fail for me; it did work with commas.
I got perfect EFA loadings, drew the CFA using the pattern matrix builder, and then tried to run the CFA, which only gives the following text in the computation summary:
Scanning [filename]
Reading data
What would have been the problem?
Seeking your help.
Thank you
Glad you got this figured out (per your email)
Prof @@Gaskination, the credit goes to an excellent teacher from whom I learnt the basics. Thank you so much.
Hello James,
Thank you for your amazing videos. You help me a lot.
The reviewer of my research paper wrote the following question for me: “But there is a question here: the item loading in the CFA should be > 0.60, while in the SEM it should be > 0.708. You kept the items' loading times at 0.708 in the SEM; is there a reason for this?”
In fact, I kept the items that have a factor loading > .50 because I thought that I should only delete items with loadings less than .50 in the CFA, and because their deletion does not improve the AVE (p.s. the AVE was less than .50 and I confirmed convergent validity through CR, as suggested by Fornell & Larcker (1981)).
Please, I have a few questions:
1. I need to know the cutoff value for the item loading in the CFA: is it .50 or .60 or .708, as the reviewer mentioned?
2. If the cutoff value is .50, what is the reference that confirms that I can keep the items with factor loadings greater than .50?
3. What is the reference stating that items with factor loadings less than .60 should be removed from the model?
There is no loading cut-off published in reputable research. The "rule" is that if the item completely undermines the validity of the factor, then you should delete it. Otherwise, it should stay. If AVE is less than 0.500, check the CR or Cronbach's alpha. These can also be used as evidence for convergent validity.
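As a rough illustration of how those two numbers relate (the loadings below are made up, not from any real model), the usual composite reliability and AVE formulas can be computed from standardized loadings in a few lines of Python:

    import numpy as np

    def cr_and_ave(loadings):
        # Composite reliability and AVE from standardized loadings (Fornell & Larcker style).
        lam = np.asarray(loadings, dtype=float)
        errors = 1.0 - lam**2                              # indicator error variances
        ave = np.mean(lam**2)                              # average variance extracted
        cr = lam.sum()**2 / (lam.sum()**2 + errors.sum())  # composite reliability
        return cr, ave

    cr, ave = cr_and_ave([0.71, 0.68, 0.62, 0.55])   # hypothetical loadings
    print(round(cr, 3), round(ave, 3))               # CR can exceed .70 even while AVE sits below .50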
Hey James, you are great at explaining. I never thought it was this simple. However, I have a question: can I run a pre-validated, tested, and approved academic model in SEM with a CFA analysis only? Or do you think the factors should go through refinement with an EFA again?
A non-trivial amount of debate exists among methodologists regarding whether the EFA is absolutely necessary, particularly when the set of observed measures has all been validated in prior literature (Costello and Osborne 2005), and when the factor structure is already theorized (i.e., when we already expect, for example, that the jsat1-5 variables should factor together to represent the job satisfaction construct). As has been shown in replication studies (cf. Luttrell et al. 2017; Taylor et al. 2003), the same scales will perform differently in the presence of other measures (not in the original study) or in the context of a different sample. Thus, in my view, the EFA should always be conducted to surface validity issues idiosyncratic to the current context. My personal school of thought is that it is best to do an EFA first because discriminant validity problems manifest more visibly in the EFA than in the CFA. Then follow this up with a CFA informed by the EFA. The EFA is for exploration only, and should be used mainly to highlight potential problems (such as discriminant validity) that will likely resurface in the CFA.
@@Gaskination This is very useful information. Thank you so much for the references as well
Hi James, thanks heaps for sharing your knowledge here. It is highly appreciated.
I've got a question (pardon if it sounds too silly): can I introduce a new manifest variable as a dependent variable after the CFA (for when I do a causal model)?
Sorry, maybe I haven't done enough research myself, but so far I understand the answer is no; can you confirm that please?
yes, you can. Assuming it is a variable that wasn't measured latently, it can still be introduced after the CFA if it meets normality assumptions (or can be transformed to meet these assumptions).
Thank you, Dr. James, for sharing such a wonderful topic with the academic community. Is a GFI greater than 0.80 acceptable if all the other model fit measures are OK? The other question is: to fit the model, is there a maximum number of covariances allowed among error terms as per the modification indices?
GFI is an old measure that is not used very much anymore. I never report it. I recommend CFI, RMSEA (with PClose) and SRMR. These together are a good and robust measure of model fit.
Can I ask a question, please? When running a CFA for each construct independently (before running the CFA for the full model together), is it normal to have CFI values above 1.000? I remember this happened in one of your videos, but I am not sure which one it was. Is there a logic to it and a way we could justify it in our paper? Let's say we run a CFA for a construct with 7 items and we get a CFI value of 1.000 or higher, but the other fit indices are excellent. How can we justify this? Any references to cite for it? Thank you
CFI cannot be greater than 1.00. There must be a calculation error. If you only have a single factor, that might cause the problem. AMOS always wants some correlations. So, if you have another factor, you can add it into the CFA.
Dear James, thank you for your informative videos. For practice, would you please share the dataset you analyze in this video?
The data for most of my videos can be found on the statwiki: statwiki.gaskination.com/
Hi James,
Thank you for your helpful videos on this topic. From what I understood, you use the same dataset for EFA and CFA. If we have the purpose of validating a particular instrument, shouldn't we split the dataset in order to avoid bias when testing the exploratory structure? Thanks
Yes, that is the more rigorous approach. Most do not have the luxury of a large dataset or multiple datasets, so it is most common to use the same dataset for both, and that is acceptable.
This is really helpful. Thanks James.
In some of the papers that I read, they also do the 1st order and 2nd order analysis. I wonder why we need to do the 1st order and 2nd order analysis? Did you make any video about this? Thanks
2nd order is only when you have multidimensional factors comprised of sub-factors. If your data does not include these higher order multidimensional factors, then you do not need to do a 2nd order analysis.
How can we calculate the mean and standard deviation for a whole latent construct? I have seen it in past papers: they report the means and standard deviations for whole individual latent constructs. How?
Some software does produce means and standard deviations for latent factors. However, AMOS does not. If you would like them, then create an average for those items and then use Excel or SPSS to create means and standard deviations.
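A minimal Python/pandas sketch of that averaging approach might look like this (the file and item names are hypothetical placeholders):

    import pandas as pd

    # Hypothetical file and item names -- replace with your own survey columns.
    df = pd.read_csv("survey.csv")

    # Average the indicators of one construct into a composite score per respondent.
    df["jsat_composite"] = df[["jsat1", "jsat2", "jsat3", "jsat4", "jsat5"]].mean(axis=1)

    print(df["jsat_composite"].mean())   # reportable mean for the construct
    print(df["jsat_composite"].std())    # and its standard deviation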
Hello James,
Thank you for your amazing videos, they help me a lot.
Please, I have a few questions that I really need answered.
In my model, I have two constructs measured through a seven-point semantic differential scale (because the original scale uses this format) and the rest of my constructs measured through a seven-point Likert scale. I have three questions regarding that:
1. Can I run the CFA for all my constructs together in the same measurement model despite the fact that I use two different scales to measure my constructs?
2. Can I convert the question items of these two constructs from the semantic differential scale to a Likert scale in order to measure all the constructs on the same scale, or is it not right to do that?
3. Can I convert both scales (semantic differential and Likert) from seven points to five points to make it easier to get answers from respondents, or is it not right to do that?
Thank you in advance.
1. Yes, you can use the two different scales in the same model because both have numeric escalation that matches the construct escalation. Just make sure to interpret the standardized estimates for only direction and amplitude, rather than trying to make some absolute interpretation of the unstandardized estimates.
2. Likert and semantic differential are very similar. They are both ordinal scales.
3. The only reason to keep it as a seven-point scale is if the original source authors make a compelling argument for why it needs to be seven points. If not, then shifting it to five points to match the rest of your survey is fine.
Thank you James, I've learned a lot. I have one question... if I do the model manually, how can I import the output of the EFA analysis from SPSS into AMOS? Thank you!
If you do it manually, then you do not need to import the EFA output. Instead, you just create the model based on the EFA output (i.e. by looking at the pattern matrix and then making a model that represents it).
One more question: in the configural invariance test, do we only have to check CFI and RMSEA and ignore GFI and AGFI? Thanks.
As in your video, GFI and AGFI were very low in the configural invariance test, but CFI and RMSEA were passing.
Model fit is assessed using a cocktail of measures. Some use CFI, RMSEA, and SRMR; others use TLI, NFI, GFI, etc. All of these help to assess model fit. GFI and AGFI are just older and more easily biased measures, so I avoid them. Good references, like Hu and Bentler (1999) and Hair et al. (2010), support this cocktail approach.
Hi, do we have to assess data normality to use ML? I saw that we need to use bootstrapping in the case of a nonnormal distribution of data, which is most likely the case with Likert scale data. Any suggestions, please?
ML does assume normality (i.e., it is a parametric method). So, we should first assess our data for adequate normality. I usually use skewness and kurtosis as a guide, rather than strict tests like Shapiro-Wilk or the K-S test.
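If it helps, here is a rough sketch of that kind of screening in Python with scipy; the file and column names are placeholders, and the cutoffs in the comment are only one common rule of thumb:

    import pandas as pd
    from scipy.stats import skew, kurtosis

    # Hypothetical data file and item columns; substitute your own (assumes no missing values).
    df = pd.read_csv("survey.csv")
    for col in ["item1", "item2", "item3"]:
        # One common guideline treats |skewness| < 2 and |excess kurtosis| < 7 as adequate,
        # though authors differ on the exact cutoffs.
        print(col, skew(df[col]), kurtosis(df[col]))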
Hi Prof Gaskin, thank you so much for your videos. I am trying to do a project about countries' social entrepreneurship levels and their culture scores, and for that I did a cluster analysis with my data; for my model I want to do a CFA. The thing is, should I check the model for each cluster one by one, or should I do it once with the complete dataset? My sample size is very small due to the limited number of countries (only 43), and when I tried to run the clusters one by one, AMOS didn't work.
With such a low sample size, you must keep the data together, rather than split it.
@@Gaskination But the problem is there are 6 cultural dimensions and the scores vary a lot within the sample, causing low model fit scores when I put them all together... I was thinking that I need to cluster them into samples of at least 10 units each and then do the CFA so that model fit might get better for each cluster.
@@tugkanbas4059 Sounds like you need more data.
Start of CFA with plug-in 5:55
Obtaining model fit 8:15
Initial results 10:13
Model fit values 11:15
Modification indices 12:25
Invariance tests 14:40
Do you have any video on the failure of the invariance test? How can we solve this problem if two groups are significantly different from each other? Invariance failed.
Here are some videos on partial invariance: th-cam.com/users/Gaskinationsearch?query=partial%20invariance
Great video again, thanks for putting it on YouTube. Regarding the validity and reliability test, the new stats tools package shows completely different CR values (all of them now less than 0.5 even!!!). What do you think is the problem? And can I just go back to your old stats tools and report the CR from there (which are all above 0.7)?
And for CMB, the old method (with CLF and without CLF, comparing the delta in standardized estimates) showed no CMB problem, while with your new method the p-value is very significant. Do you think it is better to keep the CLF? Does it make a difference in the path analysis afterward?
Thanks in advance
+Riad Cheikh Regarding CR, this new method is more accurate, as it accounts for the variance of the errors. That is unfortunate it has decreased your CRs so much. As for the CMB, the old approach had no literature support, whereas this new approach is supported by the literature. In this case, I would recommend retaining the CLF. The estimates will be different with and without it.
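For reference, a common way to compute composite reliability from standardized loadings, treating each error variance as one minus the squared loading, is sketched below in Python. This is the general textbook formula, not necessarily the exact implementation in the stats tools package, and the loadings are made-up example values:

    # Composite reliability (CR) from standardized factor loadings.
    loadings = [0.72, 0.68, 0.81, 0.75]  # hypothetical values
    sum_lambda = sum(loadings)
    sum_error = sum(1 - l ** 2 for l in loadings)  # error variance = 1 - loading^2
    cr = sum_lambda ** 2 / (sum_lambda ** 2 + sum_error)
    print(round(cr, 3))  # about 0.83 for these example loadings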
+James Gaskin Thanks James for your reply. It's unfortunate that the new approach made my CR drop so badly (my new CR is even less than 0.5), so in your opinion, how can I deal with it? Is there any way to justify the old CR numbers (I used Cronbach's alpha between items and the values were close to my old CR values)?
I am now writing my findings chapter and I had all my analysis done (I even used the old CMB approach and CR approach to compare between the groups). Do I have to update all my analysis?
I hope it's not too much to ask.
I appreciate your patience with me and all your help.
+Riad Cheikh I would recommend for the thesis to just retain all the old analyses. They were the best we had until this past weekend. This is often the way of things in science -- everything looks fine until we look back.
Sir, I have a question regarding the Heywood case. The whole model is good, but I am getting a Heywood case in one latent variable for the female group during the invariance test. One solution is to fix the regression weight of that variable and then rerun the solution. If I adopt this strategy, should I apply it to the whole invariance setup (both gender groups) or just the female group? Any good reference?
You could constrain it to be equal, per this video: th-cam.com/video/Vx24KFf-rAo/w-d-xo.html
As for a reference, probably the Byrne AMOS book would be fine.
Hi James! Congratulations on your excellent video. At 11:01 you say that correlations between the factors (indicating discriminant validity) should be less than about .8. Is this a rule of thumb for standardized or unstandardized coefficients? And what do you suggest in case a value above .8 occurs?
ioansit Standardized. If over .80, then they are likely part of a higher order construct.
Thank you so much for your help James! I am a newbie. I notice you mention in your video that you have a plugin for a pattern matrix, which it looks like is used with the maximum likelihood extraction method. However, I did a PCA and have a component matrix; will the plugin that transfers it from SPSS to SPSS AMOS work?
Yes, it should work, as long as the variable names (rather than labels) are shown in the rotated components matrix, and assuming you use decimals instead of commas for digit position (e.g., 0.789 instead of 0,789).
James Gaskin
Thanks!
Hi James, I am having a problem when I do the validity test. I get a message that says Run-time error '91': Object variable or With block variable not set. The same Excel sheet works completely fine with my old data. Any suggestions? Thanks
It worked. I used one of your comments given below: Make sure to comply with the caveats and assumptions listed in the box below the button. If you are not violating one of these, also make sure you have covaried all latent variables with all other latent variables. If this is not the problem, then perhaps you have a "latent" variable that only has one item - which means it is not really latent and should not be in the CFA.
Thanks by the way.
Hi James,
Thank you for the very comprehensive video. I have a question: I have followed what you have done in your exploratory factor analysis in SPSS, but I do not know how to transfer it to the SmartPLS software. Is it possible to help me with this issue?
What if the number of components is less than the number of variables in my conceptual framework? What should I do? My research focuses on work-life balance (WLB), and it has several constructs used to explain WLB.
1. If you mean to use the EFA in SmartPLS, then you just have to recreate the measurement model manually.
2. You can constrain the EFA to extract an exact number of factors (rather than using eigenvalues as the criteria). Or you can just run the whole model in SmartPLS and skip SPSS.
Hello, where can I find the AMOS EFA-CFA plugin on the StatWiki page? I suppose the layout of the page has changed now.
Hi James, thank you for the extremely helpful video!! Just to make sure, the dataset you use includes "different factors extracted from one measurement", e.g., playfulness, usefulness, and atypical are different dimensions of one measurement, right? If I am measuring three independent variables (e.g., self-efficacy, satisfaction, support) using three single-dimension measurements, each with 5 to 10 items, and want to check their validity, should I do three separate CFAs for them? Hope I made it clear... Thank you!!
Hello James, Thank you so much for these wonderful videos.
I have done CFA for my latent factors (5 independent and 1 dependent) for a sample consisting of 137 respondents.
My final model fit indices were (CMIN/DF 1.513, CFI .952, GFI .819, AGFI .772, NFI .873, IFI .953, TLI .945, RMSEA .062). Do these indices indicate goodness of fit of my model or not?
Note: when I looked at the modification indices, it was fine, and the largest value, 11.704, belonged to my latent dependent factor (not one of its items).
Those look fine. You can always try the model fit plugin for auto-interpretation: th-cam.com/video/wV6UudZSBCA/w-d-xo.html
Thank you James. Please, I have another question.
When I download your sheet which calculates composite reliability and AVE, after inserting my tables and clicking the red button, it does not work and I get the following message: (unrecognized project language). What does it mean? And how can I make it work?
I've never seen that error. Make sure you are complying with the caveats and assumptions listed below the button. Or just use the plugin: th-cam.com/video/JqySgMU_qMQ/w-d-xo.html
My groups are different by sex, so I started to delete the items with the largest deltas. Alas, every time I ran the analysis I got the same result, and I continued deleting items until all the items of a variable were gone. At that point it was clear that deleting items was not the way to reach invariant groups. Is there another method to reach invariant groups? In fact, I deleted a lot of items and the chi-square test remains significant.
Dang. That's unfortunate. To doublecheck the issue, you might try a levene's homogeneity of variance test (th-cam.com/video/E5VIKZU5kB8/w-d-xo.html) where gender is the factoring variable. If this also shows that the groups are variant across items, then you will just have to test your model without the grouping variable (i.e., no moderating by gender). If you absolutely must moderate by gender, then you'll have to list non-invariance as a limitation.
What should be the number of responses to perform this, if the number of variables is 34?
There are many ways to calculate sample size. I have found that 50 + 5x is plenty (where x is the number of variables). So, if x is 34, then 50 + (5*34) = 220.
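A tiny sketch in Python, just to make that rule-of-thumb arithmetic explicit:

    def min_sample_size(num_variables):
        # Rule of thumb from the reply above: 50 plus 5 per variable.
        return 50 + 5 * num_variables

    print(min_sample_size(34))  # 220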
Respected Sir, while running the structural model, should we use only the items remaining from the measurement model (after model fit), or use the complete set of items from the questionnaire? Please guide.
For the structural model, only use the items that remain after validating the factors. So, if an item was deleted during the measurement model validation, then it should remain deleted during structural model estimation.
My issue is I cannot run the analysis when I have modification indices selected. It keeps telling me I have an error and I need to remove that selection; then it will run. I am also getting a not very appealing model fit even when I follow the procedure. What does that mean? I have a sample size of 200.
Modification indices cannot be checked when you are also estimating means and intercepts (a check box in the estimation tab). So, uncheck that box unless you have missing data. If you have missing data, then you must estimate means and intercepts. If you cannot check modification indices, then look at standardized residual covariances. I think I have a tab for this in the stats tools package excel file on the homepage of statwiki.
Hi Dr. Gaskin, will you please advise on a factor loading greater than 1? The EFA has shown that the loading value for one of the two items is greater than 1, but after running the EFA-to-CFA conversion, the standardized loading shows a value less than 1. Is the initial loading value > 1 still a matter of concern?
This Heywood case can be fixed by switching to varimax rotation.
Great video Dr. James. It cleared all the doubts I had, except one. In the case where we have second-order factors, do we perform the metric invariance test prior to adding the second-order factors? And if yes, is it okay to proceed with the structural model later and test the multi-group invariance with the composites, including the second-order factor?
+Archana Mor I would recommend conducting invariance with the 2nd order factors modeled (as this is your final model), and not with the factor scores.
Thanks Dr. James
Hi James, thanks for your videos, they are the best! I have a dumb question haha: someone told me that you don't need to include the dependent variable when doing CFA. Is that true? Furthermore, for my model fit, all values are excellent except for cmin/df (it is a little over 14; sample size is over 8500). What do you recommend concerning that? Thanks, keep up the good work! Cheers from Amsterdam
+Polyvore That is a common myth. However, the DV must be included so as to confirm you do not have a tautological relationship between IVs and DV. You must establish discriminant validity before you can test causal effects. The only exception is when your DV is not a reflective latent factor. As for cmin/df, I am not surprised it is that high, given your sample size. The cmin is inflated by sample size. If you were to run the same model with a random sample of 300, instead of all 8500, you would probably find a suitable cmin/df.
Hi James, sorry for another question, but what about if you have three different DVs (e.g., acquisition satisfaction, consumption satisfaction, possession satisfaction) that you'd like to show have discriminant validity (they do, yay!), but you'd like to insert them into a causal model one at a time (to tell a simplified story)? Would it be OK to use all three in the CFA, and then just use one at a time for the causal model? Or do I need to run three separate CFAs? I'd prefer not to run three separate CFAs, as it still seems important to show reliability and validity for all three DVs collectively. Thank you!
The way you propose is fine, for the reasons you mention: you need to show they are different, but then test them separately. However, reviewers may say that you are hiding suppression effects (which can occur when similar variables are included together in the same model). So, I would recommend also running a model with all three in it. Stick this in the appendix to show how the three work when modeled together.
Great advice! Thank you so much.
Hello professor. Thank you for the video. If the DV is an observed variable (a dollar amount) and all the other IVs are latent variables, is it correct to just draw a rectangle in the model instead of an ellipse? I did include this rectangle (observed variable) as part of the model, but it seems that the CFA does not include this factor when testing model fit. I am not sure whether this is the right way to do it. Thanks!!
Yes, that is fine, but not in the CFA. It would have to be in a causal model.
Oh, I should not have included the observed variable (DV) in the CFA. I will re-run the CFA and include it in the SEM analysis. Thank you for the reply!
Hello,
"modifications indices" do not appear in my out put, I need these indices to realize the covariances between e
Modification indices only appear if you check the box for them in the output tab of the analysis properties window. You must also not have any missing data, and you cannot be estimating means and intercepts (another checkbox in the analysis properties window).
@@Gaskination Thank you for your responsiveness, but the problem persists. The modification indices do not appear even though I have checked my data (no missing values), so I checked the box to estimate means and intercepts.
@@nouraettahir3912 Make sure to UNCHECK the 'estimate means and intercepts' box.
Hi James! What is your take on doing EFA and CFA on the same sample? Is it OK? Should I split the sample, one for each analysis?
Doing it on a separate sample is considered more rigorous. However, most people don't have that luxury because data is often hard to obtain.
Thank you! Is there literature I can cite that suggests that using the same sample is less rigorous, though allowable?
@@renenob not that I'm aware of. It is a practice that most engage in because they don't have excess sample to draw from. So, unless the reviewers are asking for justification, I wouldn't bring attention to it. It is common practice.
Hi Dr. Gaskin, I am using AMOS 21. I tried testing common method bias using the method shared by you. For the fully constrained model, upon adding a CLF (using a letter like 'aa' on all paths rather than '0'), I got the output, but when running the unconstrained model I didn't get any output and instead got the following message: "The following covariance matrix is not positive definite". How do I get through from here? Kindly comment.
Check the notes for the model in the output window. Usually there are some clues in there.
Dear Professor Gaskin, I checked that and it shows me the following note, which is not of much help. If you can throw some light on this problem I would really appreciate it:
"The following covariance matrix is not positive definite: Amos can produce estimates of variances and covariances that yield covariance matrices that are not positive definite (Wothke, 1993). Such a solution is said to be inadmissible. Amos does not attempt to distinguish between a solution that is outside the admissible region and one that is on or near its boundary. Wothke, W. (1993). Nonpositive definite matrices in structural modeling. In Bollen, K.A. & Long, J.S. [Eds.], Testing structural equation models (pp. 256-293). Newbury Park, CA: Sage. For more, see the discussion of the message: 'This solution is not admissible'."
@@MrDhunna8 Does it say anything about a negative error variance? Usually that's the problem. You're welcome to email me your .amw file and data file and I can check it. You can find my email by googling me.
No, Professor Gaskin, it doesn't say anything specifically about a negative error variance. I will email you the files and you can share your understanding there. Thanks for your help!
Hi James, thank you so much for the SEM Series, it is so helpful and you should be proud of it :) I have installed SPSS and AMOS on my computer and went through your SEM Series from the first video until here, but currently I am getting an error whenever I run the model in AMOS. The error is "The model is probably unidentified. In order to achieve identifiability, it will probably be necessary to impose 1 additional constraint." Even though I am using SPSS v. 23, I couldn't make your plugin work on my computer, so I built the model manually. I wanted to mention this in case I am missing a step I don't know about. Thanks in advance; if you need further information please let me know. Best, Merve
This error occurs when you are missing a path constraint on a latent variable. Every latent variable needs to have at least one indicator path constrained to 1. As for the plugin not working, it might be because of one of the following:
1. You have variable labels instead of variable names in the pattern matrix
2. You use commas instead of decimals
3. You are using a Mac
Thanks James for the very informative videos. I just have a question: is it necessary to do EFA if I'm using pre-developed scales with established validity, or can I jump right to CFA? Thank you again
I always do EFA because it helps to identify the discriminant validity issues. However, some would say that EFA is not necessary if the scales are already established.
James Gaskin thank you.
Hi James, just a question: what if we have hypothesized that the two groups are different and the metric invariance result shows a difference? Do we still have to assess the difference between the two groups and delete the item with the biggest difference, or is that only applicable when you hypothesized that the two groups are the same?
Usually a multigroup theory is about the causal relationships between variables, rather than the measurement of variables in latent factors. Metric invariance is about measurement differences, rather than causal differences.
Thanks for the reply and the confirmation, James. Kind of you. :)
Hello, where can I find the AMOS EFA-CFA plugin on the StatWiki page?
All of the plugins are here: drive.google.com/drive/folders/0B3T1TGdHG9aEbFg1eEpqOWtrR3c
Hello sir, I have one latent variable which has 5 indicator variables containing 25 items, which means each indicator consists of 5 items; the same goes for my second latent variable. But my third latent variable does not have indicators and instead contains 10 items directly. I am now confused about how to perform the CFA.
The terms reflective, formative, first-order and second-order constructs: I am unsure about them, please explain.
You can do the CFA by just correlating all of the highest order factors. In some cases, this will be first order, but in others, it will be second order. Notice in this video how I do it with 2nd order: th-cam.com/video/HBQPqj63Y7s/w-d-xo.html
@@Gaskination I have 15 items in one latent construct, but the Hair book suggests that more items are not necessarily better; he recommends that at least three or four items per construct is sufficient. My question is how to perform CFA when we have 15 items in one construct.
My other question: how is it possible to use a specific construct which has only 3 items?
What if our model demands more items in one construct?
How do we perform CFA when we have a multidimensional measure? Suppose service quality has five dimensions like empathy, reliability, and responsiveness.
Each dimension has 5 items, which means 15 measured items.
Another question:
Suppose we use multidimensional measures; how do we check construct validity?
Is construct validity assessed for service quality with its sub-dimensions, or for the sub-dimensions with their measured items? Please explain.
@@hemakumarm9662 I would recommend you check the dimensionality of your factor by conducting an EFA first. This may show that your 15 items actually factor into multiple dimensions. Then you can model this as a second order factor in your CFA. Here is a video on second order factors in AMOS: th-cam.com/video/HBQPqj63Y7s/w-d-xo.html. If modeling the 2nd order factor reflectively, then the same validation criteria applies as for first order reflective factors. If modeling formatively, then you'll need to use SmartPLS (or some other PLS tool) because AMOS doesn't do formative factors.
Thanks for the CFA videos. They're really helpful. I've been looking around and I can't seem to find one thing though. Is there any way in AMOS to force the correlation to be 1, so I can compare a model where I'm forcing two variables to be perfectly correlated with another model where I'm letting it be free?
Yes. In the model, doubleclick on the covariance arrow. This will open the object properties window. Go to the Parameters tab. Then set the Covariance to 1.
Oops, I meant to say covariance. Thanks for the input. However, I'm trying to force the covariance to be 1 for the standardized estimates. Doing what you suggested just makes it 1 for the unstandardized estimates. Do you know how to force the covariance to be 1 for the standardized estimates? I want to compare two models, where one has two unobserved variables and the other makes these two unobserved variables the same thing by forcing the covariance to be 1. I would just have all the observed variables have paths to one unobserved variable, but I need the degrees of freedom to be equal, and this would nullify that. Thanks
It is the same. The unstandardized value will be 1, which is constrained. That means the correlation is also constrained (although the value will differ from 1).
Dear Prof. Gaskin, I see that on your website you have moved all the plugins into a single Dropbox folder. However, I can't seem to locate the EFA-CFA plugin within that folder. Your help would be highly appreciated. Thank you very much in advance
It is in the 23 or 24 folder. It is called PatternMatrixBuilder.dll
Wonderful. Thank you very very much for this information, and of course for the outstanding videos.
Hi James, I am using Windows 10. Is there a plugin available? I am pretty sure I put the plugins in the right place, but they are not showing up when I open AMOS...
Make sure you have the correct plugin (version 23 or lower, or version 24+). Also, make sure you have checked the box for 'unblock' in the file properties (if it is an option). Also make sure you are running amos as administrator (right click amos graphics exe file and select 'run as administrator'). It should work if all these things are done. If not, then it may be some other issue I haven't discovered yet.
Hi James,
You are my hero! One question: I cannot find the AMOS EFA-CFA plugin in StatWiki. Why? I just see Plugins & Estimands. Thank you!
Yes, go to plugins and estimands, and then pick the version you use (23 or lower, 24 or higher) and then you can download the files.
Mr Gaskin, first of all, thank you for the helpful videos. I would like to ask where I can see the residual variances of the variables in the AMOS output.
Thank you so much for your time!
Sorry for the delayed response. TH-cam classified this as spam for some reason... The residual variances are in the variances table in the estimates-->scalars section of the output.
Thank you so much!
Hi Dr Gaskin
Is a PClose value of 1 good or not?
Yes. That is considered "perfect".
Dear James, I want to know whether it is necessary to check discriminant validity among first-order factors while applying second-order CFA.
Hi, I have a question. I found that my model is not invariant. However, do I need to fix the non-invariance problem if my research aims to test the moderating effect of gender? Thank you :)
If you want to do a multigroup test during the structural model, then you need to meet the assumption of measurement invariance during the measurement model. If you cannot meet full invariance, try for partial invariance (at least 1 indicator path invariant per factor).
I see. Thank you so much.
Hello James,
thank you for your explanations! I have a question regarding the pattern matrix builder plugin. I have the AMOS v24 trial version and the plugin folder looks different from yours. Where do I have to install the plugin to make it work?
Can we do Invariance testing on more than two groups in CFA?
You can do it in pairs. The human mind is not good at comparing more than two things at once. Similarly, our algorithms are best at comparing in pairs. So, if you have three groups, you can do A:B, A:C, B:C or A:BC, B:AC, C:AB.
@@Gaskination Thanks for replying...
Hi Prof. James, thank you so much for the SEM Series, it is so helpful!
I have a question. You show how to do Configural Invariance & Metric Invariance. Then, how can we do Scalar Invariance? Thank you very much!
Scalar invariance involves constraining the intercepts. I don't have a video on it, but the procedure is the same as with constraining regression weights.
@@Gaskination Thanks, James!
Can I use PCA-derived components in CFA or SE Modeling?
Yes.
James Gaskin, even if my components are formative and not reflective?
but not as a measurement model, right? Just the total variable (construct) scores.
@@renenob Yes. Here is a video about it: th-cam.com/video/LRND-H-hQQw/w-d-xo.html (look around the 12 minute mark)
Hi James! When I checked for metric invariance I obtained a p-value of 1.00. Suspicious, I computed the difference between the standardized regression weights and found a difference of .009. Does this mean your tool package rounds up/down? And if so, is it more accurate to report my p-value as .991? (Assuming I did this correctly.) Thank you.
If you are finding differences in regression weights between groups, this implies it is set up correctly and there is not likely an error (unless you accidentally reported the wrong chi-square or df). The chi-square difference test in the stats tools package does round to three decimal places. You can always extend that in Excel to see what the actual value is.
Thanks, got it. Am I expected to just report the p-value as 1.00 even though the difference is .00899, or should I subtract that number from 1 and report that?
You can report both. They are not related as a function.
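For anyone who wants to check the chi-square difference test outside the Excel sheet, here is a rough sketch in Python with scipy; the chi-square and df values are placeholders for your own unconstrained and constrained models:

    from scipy.stats import chi2

    # Hypothetical fit statistics; substitute your own values.
    chisq_unconstrained, df_unconstrained = 350.2, 180
    chisq_constrained, df_constrained = 362.9, 190

    delta_chisq = chisq_constrained - chisq_unconstrained
    delta_df = df_constrained - df_unconstrained
    # p-value for the chi-square difference; a non-significant p supports invariance.
    p = chi2.sf(delta_chisq, delta_df)
    print(round(p, 3))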
Where can I get these data files for practice?
On the homepage of the StatWiki: statwiki.gaskination.com/
Dear Sir,
I need the topic and questionnaire for the SPSS file used in the SEM videos. How can I get them, sir?
All the datasets are on the StatWiki homepage. The topic can be inferred, and the questionnaire is embedded as labels in the SPSS file.
Hello sir, your videos are really good! Do you have any source for interpreting factor loadings in second-order confirmatory factor analysis? I mean, what does a high factor loading mean?
You can interpret them the same way as for first order. Same measures (e.g., CR AVE etc.) and same thresholds.
James Gaskin, does a high factor loading stand for a high correlation?
Yes. That is essentially the interpretation. It is the correlation with the other items in that factor.
Thank you so much, sir.
Can you please share with us the relevant Word file containing the checklist (which you use in the videos)?
The checklist is just on the StatWiki: statwiki.kolobkreations.com in the general guidelines section.
James Gaskin Thanks
Hello Dr. Gaskin, the plugins and estimands that I've pasted into the SPSS 24 folder are not showing up on the menu.
Make sure you stick them in the correct folder: th-cam.com/video/nf6fzpmnpDc/w-d-xo.html
Dear Sir, when we check model fit, if we get a p-value less than 0.05, we do not say it is not a good model fit, because the p-value is affected by the sample size. Is there any literature supporting this point? Can you please comment on this?
The p-value for the chi-square is a test of whether the proposed model (your model) is different from the observed model (the model inherent in the data and variable relationships). So, a significant (p < .05) chi-square indicates that the proposed model differs from the data. However, the chi-square is inflated by sample size, which is why most researchers also rely on other fit indices (e.g., CFI, RMSEA, SRMR) when samples are large.
This is great!!! I am learning so much from your channel.
Dear Prof Gaskin, when I run my CFA, the model fit indices are adequate; however, the covariances between the factors are negative. I have tried to read about this but could not find any material on it. Please advise whether I can go ahead with the next steps of the CFA, or whether my model is inappropriate. Thanks!
Negative covariance is totally fine. It just means there is an inverse relationship between the two factors. For example, if I have a factor for job satisfaction and a factor for burnout, these would have an inverse relationship because as burnout increases, job satisfaction decreases.
Thanks SO much Professor. Your videos and insights have been of immense help. :)
Thanks a lot for this! And how do you deal with missing values?
AMOS requires that you either impute the missing values (e.g., with the mean or median value) or delete those rows. This is only a requirement if you need to do modification indices or bootstrapping.
Thank you very much :)
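A minimal sketch of those two options in Python with pandas; the file name is a placeholder:

    import pandas as pd

    df = pd.read_csv("survey.csv")  # hypothetical data file

    # Option 1: impute missing values, e.g. with each item's median.
    imputed = df.fillna(df.median(numeric_only=True))

    # Option 2: listwise deletion of any row with a missing value.
    complete_cases = df.dropna()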
What is the cut-off value for covariances?
There is no cutoff value.
@@Gaskination Someone told me that it should be above 0.5, and you have mentioned in one of your AMOS videos that covariances should be below 0.8.
@@nasirislam2390 Yes, there are different thresholds. Above .5 makes sense for constructs that should be related. Less than 0.8 makes sense if we want them to be discriminant.
Gratitude, sir. Sir, I used the multigroup analysis plugin; however, I am having difficulties with the interpretation. Do you have any research articles that used MGA?
Thanks
@@nasirislam2390 Here are some: statwiki.kolobkreations.com/index.php?title=References#Moderation_and_Multigroup
Hello Sir, I think this plugin has been removed from the website. Can you please check?
Thank you for the awesome videos.
I've changed the link to include all plugins and estimands. So, just left-click on the link for plugins and estimands, and then it will take you to a folder that contains all of them.
How do I fix the issue: "The following covariance matrix is not positive definite (Group number 1 - Default model)"?
Fazal Haleem This usually occurs when you have negative error variance or some other oddity like that. Check the other notes for the model for ideas.
Thank you so much
@@Gaskination Please, can you send me this data?
Thank you for the video.
@@marwazakarya1597 The data is available on the homepage of statwiki: statwiki.kolobkreations.com/
Hello Sir,
Thank you for your videos and guidance.
I am a PhD candidate and I seek your help to answer my following questions:
1. Is an MI between a uniqueness (error term) and a factor considered an issue in further analysis? If yes, how can it be solved?
2. The GFI value for my model is 0.857. Is it acceptable, or should it be more than 0.9?
FYI (my model contains 62 items, 13 constructs, and the sample size is 418 respondents).
Thank you so much.
1. Not sure what you mean. However, generally speaking, we seek to address modification indices by removing and adding paths and variables, rather than by covarying errors with factors (not sure if that answers your question, since I wasn't sure what your question was).
2. Check instead the CFI, RMSEA, and SRMR. GFI is a bit old and can be biased by sample size and model complexity.
When I try to create the diagram using the Plugin, I receive an error message saying 'There are no observed variables in the model. That is, none of the variable names in the model specification could be found in the data file.'
I have been working alongside your SEM Series so far, and I do not understand what to make of this error message. The variables have names and labels in SPSS, and I made sure SPSS outputs names, not labels. Am I missing something, or will it just not work for me?
That is unusual. The only other reason that would happen is if you linked the wrong data file in AMOS. If that is not the reason either, then I'm not sure...
Hi ElEscobar, did you find a solution? I have the same problem, 'There are no observed variables in the model. That is, none of the variable names in the model specification could be found in the data file.', when developing the pattern matrix. Interestingly, I get the same error message when using the plugin for version 24 (or higher) or version 26 (or higher). I tried everything: renaming variables, adjusting SPSS to names and labels, reinstalling and matching different software versions (SPSS 24/AMOS 24, SPSS 26/AMOS 26) --> nothing works. Previously developed pattern matrices (SPSS 24/AMOS 24) used to run fine; now they are not working anymore. I tried to develop the same pattern matrix as before, and the only thing I receive is this specific error message. I would kindly like to touch base with you if you found a solution. There is also no specific solution published anywhere on the web. Thank you very much for your help. Best wishes,
Hello sir, you are damn good!
Thanks!
Hi sir, I have a few issues. My sample size is 4000 and I have 5 factors with 10 or 11 variables. My cmin/df is above 60. I have tried modification indices, but it is not going down, and there are several modification indices. The EFA ran perfectly with no discriminant validity issues. Kindly suggest a way forward. Regards
Sample size inflates the chi-square. Either take a random sample of 500 or ignore the chi-square measures.
OK sir. Please do tell... what about the modification indices? There are several. Please direct me to any videos regarding the management of large samples... Thank you again. Regards
There will be so many modification indices because the chi-square is inflated (and MIs are a function of chi-square). So, again, I would recommend either taking a random sample of 500, or just ignoring any metrics based on the chi-square. Instead look at the RMSEA, CFI, and SRMR.
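A one-line sketch of drawing such a random subsample in Python with pandas (the file name is a placeholder; the sample size of 500 follows the suggestion above):

    import pandas as pd

    df = pd.read_csv("survey.csv")  # hypothetical full dataset
    # Draw a reproducible random subsample of 500 cases for the chi-square-based checks.
    subsample = df.sample(n=500, random_state=1)
    subsample.to_csv("subsample_500.csv", index=False)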
May I know where to download the chi-square difference Excel file?
It's on the homepage of the StatWiki: statwiki.kolobkreations.com
Hi James, first of all thank you for your videos! I have a problem trying to run my model in AMOS: it only says "Scanning" and then "Reading data" and never runs my model. What can I do?
Weird. Check the output to see if there are any notes for it. If the output is also not available, then you might check the EFA again. Make sure you have also checked data screening issues.
Hello Sir, thank you for the great videos with explanations. I am doing my thesis for my Master's degree and I have a model with a single-item construct, measured on a five-point Likert scale, as the DV. So do I need to add it when I do EFA and CFA? Can I do EFA and CFA only for the other independent variables?
I ask because when I added it to the EFA there were no values for it in the pattern matrix, so I think I did something wrong. Please help me with this issue. :)
Gauri Prabhani it does not belong in the EFA or CFA. You should only include latent constructs in those analyses. Because this has only one indicator, it is observed rather than latent. You can bring it in just during the causal model or structural analysis.
James Gaskin Thank you sir. Got it :)
I have 7 factors, and the last factor has two items. When I click calculate estimates, the standardized estimates do not appear for the last factor, which has two items.
Also, if the standardized regression weight comes out greater than 1 for any item, is that problematic?
That is unexpected. If estimates are produced for any of the items, then they should be produced for all. As for standardized estimates greater than 1.00, yes, that is a problem. Here is a video about it: th-cam.com/video/Vx24KFf-rAo/w-d-xo.html
Thanks for your quick reply.....:)
Hi James, thanks for your reply... The second problem is solved through the video, but for the last two items the unstandardized estimates are coming out while the standardized estimates are not appearing.
I'm not sure what could cause that, if the standardized estimates box is checked in the analysis properties window.
Thank you James:)
Hi again sir... what is run-time error 91? I am receiving it every time I run the Excel sheet.
Make sure to comply with the caveats and assumptions listed in the box below the button. If you are not violating one of these, also make sure you have covaried all latent variables with all other latent variables. If this is not the problem, then perhaps you have a "latent" variable that only has one item - which means it is not really latent and should not be in the CFA.
Can someone explain to me what it means when PClose = .000? Thank you
It means that there is a 0% chance that the RMSEA is less than the desired 0.05 level.
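In other words (my paraphrase of the usual definition, not a quote from the video or the AMOS documentation): PClose is the p-value for the test of close fit, PClose = P(RMSEA <= .05), so a PClose of .000 means the hypothesis of close fit is rejected, while a PClose above .05 is commonly taken as evidence of acceptable fit.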
Hello sir, I hope you are fine.
1) Can we do EFA and CFA on the same sample data? 2) Or should we divide our data, half for EFA and half for CFA? 3) Or, after the EFA, should we collect new data from the same population for the CFA?
1. Yes, although it is considered more rigorous to do it on separate samples if you have sufficient data. Most do not have sufficient data, so they use the same data for both.
2. Only if you have enough data (usually more than 400).
3. If possible, that would be awesome.
Dear James, first, thank you so much for these wonderful videos. I have done CFA and all my model fit indicators are fine except the standardized RMR, which is 0.0942. How can I keep it below the 0.08 threshold level? My sample size is 400. Waiting for your reply.
Best regards,
Syed
You can try looking at the standardized residual covariances to see if there is an item that is causing most of the error. I think I show how to do that in this one: th-cam.com/video/JkZGWUUjdLg/w-d-xo.html
@@Gaskination Dear Sir, thank you so much for the wonderful videos and comments/answers. I have one question: is it better to drop the item or to covary the errors? (Reference/source please.)
I have read somewhere that covarying the error terms is only allowed in longitudinal studies. Which is the better option?
Secondly, for the modification indices threshold you entered 20. Can we enter 15 or 10? (Reference/source please.)
@@drmuhammadwahab I'm terrible with citations... It is better to drop an item if you have multiple superfluous reflective items (they are intentionally redundant). However, if removing the item would lead to having 3 or fewer items, it may be better to covary the error terms. As for modification indices, their size is based on the chi-square, and the chi-square is based on model complexity. So, for a complex model, it doesn't make sense to keep the original threshold of 4, and for a simple model, it doesn't make sense to set the threshold so high.
@@Gaskination Thank you so much for your reply :). Yes, I have 6 items loaded on the Factor. THANK YOU.
Respected sir,
Can you please tell me how the value of chi-square is calculated in CFA? Can you explain with a simple example? I will be very grateful to you.
It is like subtracting the predicted covariance matrix (representing the proposed model) from the observed covariance matrix (natural to the data). The bigger the difference, the larger the chi-square. Differences come from unestimated parameters (relationships).
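For those who want the formula behind this, here is the standard maximum-likelihood discrepancy function from the general SEM literature (stated here as background, not taken from the video). With S the observed covariance matrix, the model-implied covariance matrix written as Sigma-hat, p the number of observed variables, and N the sample size:

    F_{ML} = \ln\lvert\hat{\Sigma}\rvert - \ln\lvert S\rvert + \operatorname{tr}\!\left(S\hat{\Sigma}^{-1}\right) - p, \qquad \chi^{2} = (N-1)\,F_{ML}

Relationships left unestimated make Sigma-hat diverge from S, which increases the discrepancy and therefore the chi-square.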
Sir, can you share the data file of these 6 factors?
The data for the SEM series playlist is available on the homepage of statwiki
Thanks so much my Dear...
I have 13 variables; can't help them :(((
Hello sir, can we correlate factors obtained from two different samples in AMOS? If yes, how?
AMOS can only handle one set of variables per model. So, even if you have different samples, they must have the same variables.
Dear Sir,
Sorry to disturb you again. Can you please help me find an article reference for the cutoff of cmin/df (chi-square/df)? I used Tabachnick & Fidell (2007, p. 715), but it...
I have a references section on StatWiki: statwiki.kolobkreations.com/index.php?title=References as for a reference for the cutoff, most model fit references come from: Hu and Bentler (1999)
James Gaskin Sir, I had checked the reference section and read the Hu and Bentler paper, but the cutoffs provided by you are different.
A man of honor!
Hi James, thanks for your sharing. I am using AMOS to do CFA and I have encountered some problems. I connected the factors, but there were no statistical estimates for the factors except 1. This is the image I created: ibb.co/m7ofZ8 It would be much appreciated if you could answer my question. Thank you.
Covary your latent variables.
It stayed the same even after I covaried the latent variables ;( Anyway, thanks so much.