What would the process be for pilot testing a survey questionnaire with 4 factors? We want the factors to have an equal number of items; we pilot tested the factors with mostly 20+ items each.
And how do we write up the validity results in our research work?
How do I get the Pearson table?
Actually, the corrected item-total correlation, as you show it, is an indicator of discriminatory power, i.e. it shows how well an item differentiates those scoring high from those scoring low. Discriminatory power is a function of reliability rather than validity. To show that an instrument is valid, we should run a more complex procedure such as factor analysis.
Thank you so much, sir, for your comment. You are right: correlation power is not the most appropriate procedure for item validation, but using correlation-based procedures for questionnaire validity can provide insights into the strength of relationships between items, or between the questionnaire and other established measures. While correlation coefficients can indicate how well items relate to each other or to a construct, they do not directly measure validity. Correlation power helps determine the likelihood of detecting true correlations, which is useful in hypothesis testing. However, it has limitations and should be interpreted cautiously, as a high correlation does not guarantee validity due to potential shared measurement errors or biases. You are right that, for a comprehensive validation, combining correlation analysis with other methods such as factor analysis and construct validity assessments is more appropriate.
Furthermore, sir, factor analysis is a complex procedure, and it has some requirements that may not be fulfilled by every questionnaire. For factor analysis, the variables should be correlated; low correlations suggest factor analysis may not be appropriate. The Kaiser-Meyer-Olkin (KMO) measure should be above 0.5, and Bartlett's test should yield a p-value less than 0.05. Additionally, for factor analysis one needs to ensure that the variables are measured on an interval or ratio scale, and to assess the data for multicollinearity and communalities.
Sir, I have a different type of questionnaire where I have grouped the factors, and each factor consists of several variables. How can I conduct a validity test on a study like this?
To conduct a validity test for your questionnaire with grouped factors and variables, you can consider several types of validity, e.g. content validity assessed by expert professionals, or criterion validity assessed through correlation. You can calculate validity for each factor or for the questionnaire as a whole.
Sir, may I know: if my data is not normally distributed, should I use Spearman correlation or Pearson correlation for the questionnaire validity?
Dear @lixuanlai, when data is not normally distributed, Spearman's correlation is more appropriate than Pearson correlation for assessing the validity of a questionnaire. Unlike Pearson, Spearman's correlation does not rely on the assumption of normal distribution, making it suitable for non-normally distributed data. This non-parametric test is robust for ordinal data and captures the monotonic relationship between variables, making it a better choice when normality assumptions are not met. I hope that clears it up. You can find details about this issue in the following citations.
[1] Spearman's Correlation Explained - Statistics By Jim statisticsbyjim.com/basics/spearmans-correlation/
[2] Pearson vs Spearman correlations: practical applications - SurveyMonkey www.surveymonkey.com/market-research/resources/pearson-correlation-vs-spearman-correlation/
[3] Correlation (Pearson, Kendall, Spearman) - Statistics Solutions www.statisticssolutions.com/free-resources/directory-of-statistical-analyses/correlation-pearson-kendall-spearman/
[4] Which correlation coefficient is better to use: Spearman or Pearson? www.researchgate.net/post/Which-correlation-coefficient-is-better-to-use-Spearman-or-Pearson/2
[5] The reliability of a two-item scale: Pearson, Cronbach, or Spearman ... www.researchgate.net/publication/232610246_The_reliability_of_a_two-item_scale_Pearson_Cronbach_or_Spearman-Brown
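As a quick illustration, both coefficients can be computed on the same data. This is a minimal Python sketch (assuming numpy and scipy are available; the "item" and "total" scores are simulated, not real survey data):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Simulated data: 5-point Likert item responses and a related total score
rng = np.random.default_rng(42)
item = rng.integers(1, 6, size=100)                  # ordinal item scores
total = item * 3 + rng.integers(0, 10, size=100)     # noisy related total

r_pearson, p_pearson = pearsonr(item, total)         # assumes normality/linearity
rho_spearman, p_spearman = spearmanr(item, total)    # rank-based, monotonic

print(f"Pearson r = {r_pearson:.3f} (p = {p_pearson:.4f})")
print(f"Spearman rho = {rho_spearman:.3f} (p = {p_spearman:.4f})")
```

With ordinal, non-normal data like this, the Spearman coefficient is the one to report; Pearson is shown only for comparison.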
@@bilalmalakandipt Thank you, sir, for your clarification. May I know whether construct validity via correlation should be conducted after the actual survey or after the pilot survey?
@@lixuanlai A pilot study is often sufficient for validating a questionnaire and assessing its feasibility, while a full survey is recommended when the goal is to generalize the findings to a larger population, the questionnaire is complex, the target population is hard to reach, or resources are available. The decision should be based on the specific research objectives and constraints of the study.
@@lixuanlai but the procedure for construct validity is different from the above procedure discussed in the video. To test the construct validity of a questionnaire using SPSS, you can perform factor analysis. Here are the steps:
1. In SPSS, go to Analyze --> Dimension Reduction --> Factor.
2. Transfer all the items related to the construct you want to test into the Variables box. For example, if you have 5 items measuring "life satisfaction", move those 5 variables to the right side.
3. Click on the Descriptives button and select KMO and Bartlett's test. This will show if your data is suitable for factor analysis.
4. Click on the Extraction button and choose the method, such as Principal Component Analysis. Set the number of factors to extract, for example 2 factors.
5. Click on the Rotation button and select a rotation method, such as Varimax. This will make the factors more interpretable.
6. Click OK to run the factor analysis.
The results will show:
• KMO measure of sampling adequacy - should be above 0.5 for good factor analysis.
• Bartlett's test of sphericity - should be significant (p < 0.05).
• Factor loadings - each item should load highly (> 0.5) on only one factor, showing convergent and discriminant validity.
If the results show that the items load as expected on the factors, with high loadings on the intended factor and low loadings on the other factors, then you can conclude that the questionnaire has good construct validity.
Additionally, you can calculate the average variance extracted (AVE) for each factor. AVE should be above 0.5 for good convergent validity. You can also check the correlation between factors - if the correlations are low, it indicates good discriminant validity.
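SPSS reports KMO and Bartlett's test for you through the menus above, but as an illustration of what those two numbers actually check, here is a minimal Python sketch (numpy/scipy only; the dataset is simulated, and the formulas are the standard textbook ones, not taken from SPSS itself):

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test: is the correlation matrix different from identity?"""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return statistic, chi2.sf(statistic, df)

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv_R = np.linalg.inv(R)
    # Anti-image (partial) correlations from the inverse correlation matrix
    partial = -inv_R / np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
    np.fill_diagonal(R, 0)
    np.fill_diagonal(partial, 0)
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())

# Simulated data: 200 respondents, 5 items driven by one common factor
rng = np.random.default_rng(0)
factor = rng.normal(size=(200, 1))
items = factor + 0.5 * rng.normal(size=(200, 5))

stat, pval = bartlett_sphericity(items)
kmo_value = kmo(items)
print(f"Bartlett chi2 = {stat:.1f}, p = {pval:.4f}")   # want p < 0.05
print(f"KMO = {kmo_value:.3f}")                        # want > 0.5
```

Because the items share one factor by construction, Bartlett's test comes out significant and KMO is well above 0.5, i.e. the data is suitable for factor analysis by the criteria in the comment above.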
Hello sir. Thank you for the good teachings.
Now i have a question.
Why have you added those variables; what is the meaning? Can we add variables that we are not sure belong to the same construct? Or can I perform factor analysis first, for the purpose of identifying factors that can be added together? What is your advice on which method is better, construct validity or the one you have demonstrated, and when to use each?
Sir, how do I download IBM SPSS?
I can share the setup with you.
Sir, can you explain how to analyse a variable across 4 groups?
For within-group analysis, when data is gathered three or more times, we can use repeated-measures ANOVA if the following assumptions are met; otherwise we use the Friedman test.
1. The data is normally distributed
2. The data is collected three or more times from the same (dependent) group
3. There are no outliers
4. The dependent variable is continuous and the independent variable is categorical having three or more levels
5. There is no violation of sphericity (the variances of the differences between all pairs of time points are equal)
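The non-parametric fallback, the Friedman test, is available in scipy. A minimal sketch (the eight participants' scores at three time points are hypothetical):

```python
from scipy.stats import friedmanchisquare

# Hypothetical: the same 8 participants measured at three time points
time1 = [72, 65, 80, 74, 69, 77, 70, 66]
time2 = [75, 70, 82, 78, 74, 80, 73, 70]
time3 = [79, 74, 85, 81, 77, 83, 78, 75]

# Rank-based test for differences across the repeated measurements
stat, p = friedmanchisquare(time1, time2, time3)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")
```

Here every participant's score rises at each time point, so the test is significant (p < 0.05), indicating a difference across the repeated measurements.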
@@bilalmalakandipt thank you sir
For between-group analysis, when there are three or more independent groups, we can use ANOVA (an extension of the independent t-test).
There are three primary assumptions in ANOVA:
1. The responses for each factor level have a normal population distribution.
2. These distributions have the same variance.
3. The data are independent.
If the data is not normally distributed in the above case, the non-parametric alternative to ANOVA is the Kruskal-Wallis test, which is an extension of the Mann-Whitney U test.
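Both tests are one-liners in scipy. A minimal sketch with four hypothetical independent groups (which matches the four-group question above; the scores are made up):

```python
from scipy.stats import f_oneway, kruskal

# Hypothetical scores from four independent groups
g1 = [23, 25, 21, 27, 24]
g2 = [30, 32, 29, 35, 31]
g3 = [22, 20, 24, 23, 21]
g4 = [28, 27, 30, 26, 29]

f_stat, p_anova = f_oneway(g1, g2, g3, g4)  # parametric: assumes normality
h_stat, p_kw = kruskal(g1, g2, g3, g4)      # non-parametric alternative

print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.4f}")
```

A significant result in either test only tells you that at least one group differs; follow-up (post-hoc) comparisons are needed to say which groups differ.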
What if Cronbach's alpha is 0.98? Is it valid?
Cronbach's alpha is primarily used to assess the reliability, or internal consistency, of a set of scale or test items in a questionnaire. It evaluates how closely related a group of items are as a measure of a single construct, indicating whether they consistently reflect the same underlying concept. A higher alpha value (generally above 0.7) suggests that the items measure the same thing reliably, although a very high alpha (e.g. above 0.95) can also signal that some items are redundant. It is not a validity test.
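For illustration, Cronbach's alpha can be computed directly from the item-score matrix using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total score). A minimal Python sketch with made-up responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical: 6 respondents answering 4 items of one construct
scores = [[4, 5, 4, 5],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5],
          [3, 2, 3, 3]]

alpha_value = cronbach_alpha(scores)
print(f"alpha = {alpha_value:.3f}")
```

Because these hypothetical items move together across respondents, alpha comes out well above 0.7, i.e. the items are internally consistent; whether they measure the right construct is a separate (validity) question.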