Another fantastic tutorial here. These are the basics of what all IEs and QEs should be using to analyze all their key process data on the shop floor to continuously improve. Thanks so much for all your hard work on this.
Minute 7:38: how do you know that we have three standard deviations toward the right and three to the left? How did you determine the three?
Great explanation!
Very nice explanation
Thank you very much Jess, it's our pleasure to help others learn and grow. Please feel free to let us know what other subjects interest you so we may create additional videos! Best of luck and happy holidays!
Why have we chosen 3 std dev toward the top and 3 std dev toward the bottom?
Summary:
1. Module Overview: Module 4 of "Quality Control Part 2" focuses on analyzing data using Minitab for preventive measures and understanding process behavior.
2. Control Charts: The module introduces control charts to monitor processes on an ongoing basis, enabling adjustments to maintain process stability.
3. X-Bar and R Charts: The session demonstrates creating an X-Bar and R chart using Minitab to analyze process means and ranges, allowing for effective monitoring and adjustment.
4. Control Limit Adjustment: The instructor explains how to adjust control limits based on desired specifications, emphasizing the importance of recognizing process shifts and variations.
5. Pareto Chart and Distribution Identification: The module covers using Pareto charts to identify main issues and determining data distribution using Minitab for effective process performance evaluation and continuous improvement.
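The X-bar and R chart limits summarized above can be computed by hand. Below is a minimal sketch of that calculation for subgroups of size 5, using the standard control-chart constants (A2 = 0.577, D3 = 0, D4 = 2.114 for n = 5); the measurement data is made up for illustration and is not from the video.

```python
# Sketch: X-bar and R chart control limits computed by hand,
# mirroring what Minitab produces for subgroups of size n = 5.
# The measurements below are hypothetical.

# Standard control-chart constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

# Five subgroups of five measurements each (made-up data)
subgroups = [
    [10.1, 9.8, 10.0, 10.2, 9.9],
    [10.0, 10.3, 9.7, 10.1, 10.0],
    [9.9, 10.0, 10.2, 9.8, 10.1],
    [10.2, 10.1, 9.9, 10.0, 9.8],
    [10.0, 9.9, 10.1, 10.3, 9.7],
]

means = [sum(g) / len(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]

xbar_bar = sum(means) / len(means)   # grand mean (center line of X-bar chart)
r_bar = sum(ranges) / len(ranges)    # average range (center line of R chart)

# X-bar chart limits: grand mean +/- A2 * R-bar
xbar_ucl = xbar_bar + A2 * r_bar
xbar_lcl = xbar_bar - A2 * r_bar

# R chart limits: D3 * R-bar and D4 * R-bar
r_ucl = D4 * r_bar
r_lcl = D3 * r_bar

print(f"X-bar chart: LCL={xbar_lcl:.3f}, CL={xbar_bar:.3f}, UCL={xbar_ucl:.3f}")
print(f"R chart:     LCL={r_lcl:.3f}, CL={r_bar:.3f}, UCL={r_ucl:.3f}")
```

The A2 factor already bundles the three-standard-deviation width discussed in the comments, scaled for the subgroup size.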
Your explanation is perfect, thanks for sharing. Although I am wondering why you got so few likes for it.
Thanks for the explanation. What if the data fits neither any distribution nor any transformation?
Thank you for making these modules. They really enhance my knowledge of using Minitab. I worked on Pareto charts, box plots and histograms in Excel through pivot tables, but this seems very convenient in Minitab. Will it be possible for you to do a video on Taguchi's DOE in Minitab? Please!
Hi AA, thank you for your kind comments. Taguchi's DOE has been a highly requested video and we would be more than happy to help you out; it's not that different from the factorial and response surface DOEs. Stay tuned!
Sincerely, QE NPI Andres R.
This series is great. I'm curious: how would you handle a situation where none of the distributions are a good fit for the data? For example, all distribution fits have a P-value less than 0.05?
Hi! This is a very common question actually. Usually the approach you take depends on what you want to use the data for and how it was acquired.
Case 1: You are executing an exploratory, low-risk study. In this case, look at your sample size and your normality plots. Why is the data not normal? Is it because there is one major outlier? Or was the data taken from multiple days/machines?
For example, if we want to make a decision regarding the average salaries in a certain zip code, and we notice that mostly everyone earns around 50K USD per year but there are one or two millionaires with 500K-per-year salaries, then we should be fine eliminating those two data points and running our normality test again. Most of the time the data will show normality after cleaning outliers.
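The outlier-screening step in the salary example can be sketched in a few lines. This is just one possible screening rule (a robust z-score cutoff based on the median and MAD, so the outlier itself doesn't inflate the spread estimate); the salary figures below are hypothetical, and the normality test itself would still be re-run in Minitab on the cleaned column.

```python
# Sketch: screening outliers before re-running a normality test,
# as in the salary example above. All figures are made up.
import statistics

salaries = [48_000, 52_000, 50_000, 49_500, 51_000, 50_500,
            47_000, 53_000, 50_000, 500_000]  # one millionaire-type outlier

def remove_outliers(data, z_cutoff=3.0):
    """Drop points more than z_cutoff robust z-scores from the median,
    using MAD so the outlier itself doesn't inflate the spread estimate."""
    med = statistics.median(data)
    mad = statistics.median(abs(x - med) for x in data)
    scale = 1.4826 * mad  # MAD -> approx. std dev for normal data
    return [x for x in data if abs(x - med) <= z_cutoff * scale]

cleaned = remove_outliers(salaries)
print(f"before: n={len(salaries)}, mean={statistics.mean(salaries):,.0f}")
print(f"after:  n={len(cleaned)}, mean={statistics.mean(cleaned):,.0f}")
# After cleaning, re-run the normality test in Minitab
# (Stat > Basic Statistics > Normality Test) on the cleaned column.
```

Here the single 500K point is dropped and the remaining data is plausibly normal, which is exactly the situation the example describes.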
Case 2: You are executing a validation study in a regulated industry. In this case a rationale must be provided, with a statement similar to the following: "Samples tested have an average result of XX with a standard deviation of XX. Normality tests show the data is not normal, and individual distribution identification is unable to determine an ideal fit distribution (all P-values below 0.05).
However, the process tolerance is USL 20 and LSL 5. As shown in the following histogram (insert data histogram), the data is perfectly centered and not in contact with the USL and LSL, with an average of 12.5 and a standard deviation of ±1 unit.
The complete data range falls within 9 and 15 lbs/f. The risk of creating non-conforming parts is 0%.
Option 1: Since the sample size exceeds 30 samples, we shall proceed to analyze it as normal data.
Option 2: Since visual review of the data shows only one outlier sample, a normality test shall be executed removing the outlier. After outlier removal, the normality plot shows the data is normal (include new normality plot). Since the data meets normality but we can't remove the outlier point, we shall proceed to analyze the Ppk/Cpk requirement using the full data set but assuming normality."
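The Ppk check referenced in the rationale is a short calculation: Ppk = min(USL − mean, mean − LSL) / (3s), using the overall (sample) standard deviation. A minimal sketch, using the USL/LSL from the hypothetical rationale above and made-up measurement data:

```python
# Sketch: Ppk computed from the overall standard deviation,
# with the USL/LSL from the hypothetical rationale above.
# The measurement data is invented for illustration.
import statistics

USL, LSL = 20.0, 5.0
data = [12.1, 12.8, 12.5, 13.0, 11.9, 12.4, 12.7, 12.3, 12.6, 12.2]

mean = statistics.mean(data)
s = statistics.stdev(data)  # overall (sample) std dev -> Ppk, not Cpk

# Ppk uses the worse (closer) of the two specification distances
ppk = min(USL - mean, mean - LSL) / (3 * s)
print(f"mean={mean:.2f}, s={s:.3f}, Ppk={ppk:.2f}")
```

With data this tightly centered inside a wide tolerance, Ppk comes out far above the usual 1.33 requirement, matching the "0% risk of non-conforming parts" argument in the rationale.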
This rationale can be signed off by the staff quality team and manufacturing team to let the validation proceed to the next stage.
If creating a rationale is not possible/allowed (it should be, but some sites are like that), then the samples must be taken again, with greater care. Make sure you aren't combining different shifts, raw materials, or machines.
Also make sure that the machine uses the same parameters. Since this is a validation, if you need to measure/check your raw material as it comes in, request a quality inspector.
Data is normal most of the time, but when you mix and match machines/operators/lots you will get non-normal results. There is no mathematical/statistical way to fix/explain this other than a rationale.
We hope this answers your questions; for a more detailed explanation, please email us at info@cusum.mx
Sincerely, QE NPI Andres R.
Can you make a video on a Taguchi and RSM comparison?
Hi, you have shown examples of variable data. Is it possible to do a video about analyzing attribute data (sorry if I am using the wrong terminology), i.e. defects per unit/item manufactured? How do we tell if the defects are well within the Cpk/Ppk limits, or in other words, that the fallout rate is statistically expected and not due to an out-of-control process?
Hi Jack! Thank you for your questions,
You can consider defects per unit as a binomial distribution and analyze it with a 2-proportions test (original lot vs. current lot); this will only work for historical analysis. If you want to control the process during manufacturing, I would recommend using a control chart to plot the number of defects: please use Stat > Control Charts > Attribute Charts > P Chart. With this you can monitor defectives using pass/fail criteria (use 0 for pass and 1 for fail). If you want to count the number of defects per lot instead of using a simple pass/fail criterion, you can use the C Chart. Please let me know if you have any additional questions,
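The limits that the P Chart menu path above produces can also be computed directly, which makes the binomial connection concrete: the center line is the overall fraction defective p̄, and the limits are p̄ ± 3·√(p̄(1−p̄)/n). A minimal sketch with invented lot sizes and defect counts:

```python
# Sketch: P-chart center line and limits computed by hand, matching
# what Stat > Control Charts > Attribute Charts > P Chart produces.
# Lot sizes and defective counts below are made up.
import math

lot_sizes = [200, 200, 200, 200, 200]
defectives = [6, 4, 8, 5, 7]  # number of failing units per lot

p_bar = sum(defectives) / sum(lot_sizes)  # overall fraction defective (CL)
n = lot_sizes[0]  # constant lot size in this example

sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma_p
lcl = max(0.0, p_bar - 3 * sigma_p)  # a proportion can't go below zero

print(f"CL={p_bar:.4f}, LCL={lcl:.4f}, UCL={ucl:.4f}")
for size, d in zip(lot_sizes, defectives):
    p = d / size
    flag = "OUT" if (p > ucl or p < lcl) else "ok"
    print(f"  p={p:.4f} {flag}")
```

With varying lot sizes the limits would be recomputed per lot (replace the constant n with each lot's size), which Minitab handles automatically.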
Sincerely,
QE NPI Andres R.
Hi, thanks for your explanation. I have a question: is the logistic distribution better than the normal distribution in your example? How should we choose the most appropriate distribution?
Hope you can see my question and reply.
Thanks again.
Hi Tengguang, as long as you get a P-value greater than 0.05 on the normality test, you can use the normal distribution for your data set; if it is non-normal, then you can use the individual distribution identification tool to determine which distribution best matches the data. Now, in theory, even if the normality test passes (greater than 0.05), if there is a distribution that fits the data better, with a greater P-value than the normal distribution, then you can also use that alternate non-normal distribution (it simply has a better fit than the normal distribution).
In summary, all distributions with P-values greater than 0.05 are acceptable approximations of the data, with the greatest P-value having the best fit.
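The selection rule in that summary can be sketched in a few lines. The P-values below are made-up stand-ins for the output of Stat > Quality Tools > Individual Distribution Identification (the logistic value echoes the question above); only the decision logic is the point here.

```python
# Sketch of the selection rule described above: any distribution with
# p > 0.05 is an acceptable fit, and the largest p-value fits best.
# These p-values are hypothetical placeholders for Minitab's output.

fits = {
    "Normal": 0.112,
    "Logistic": 0.431,
    "Lognormal": 0.038,
    "Weibull": 0.241,
}

ALPHA = 0.05
acceptable = {name: p for name, p in fits.items() if p > ALPHA}
best = max(acceptable, key=acceptable.get)

print("acceptable fits:", sorted(acceptable))
print("best fit:", best)
```

With these numbers the normal distribution is acceptable but the logistic fits better, which is exactly the situation the question describes.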
I hope this helps clear up any confusion,
Best of luck and take care!
Sincerely, QE NPI Andres R.
Hi, you know, companies nowadays don't want to have Minitab because the licenses are expensive and also need to be renewed every year. Are there any examples in Excel? That would be more useful to me.
Thanks.
Good afternoon, Miguel! Of course, all of this and more can be done in Excel! We would be happy to share some videos using Excel for quality control. Are the topics in this module the most useful ones for you? We can also do this for the other modules that have already been published, or we would gladly take your suggestions and create even more topics! Regards, and thank you very much for subscribing to the channel!
QI Macros is useful in Excel and isn't expensive, but the charts don't compare to Minitab's.
I am from India and only 80 years old. Your presentation is excellent. Will it be possible for you to throw some light on control charts for service industries? I am sending a separate mail to info@cusum.mix. Regards, Ramesh Parkhi
Good evening Ramesh, we would be more than happy to assist you, and we welcome all suggestions. Please note that no email has been received yet; please confirm the email address (it's info@cusum.mx; there appears to be a typo, as yours was addressed to .mix).
Best of luck!
Sincerely, QE NPI Andres R.