Thursday, June 25, 2020

Measurement System Analysis

Section 24: Measurement System Analysis

Introduction to MSA
The requirements of measurement systems
Variable MSA – Gauge RR
MSA graphing
Attribute Measurement Systems
Calibration of Measurement Systems
Features of a good measurement system: accuracy, repeatability, linearity (produces accurate and consistent results), reproducibility and stability

Introduction to MSA

Before you try to understand the root cause of a process problem, it is important to ensure that the data you are using is reliable. The measurement system used has a large impact on the data gathered: if it is not reliable, it may introduce errors and bias into the data. A measurement system is the whole approach to gathering data in the Measure stage, including factors such as people, tools, standards, training and procedures. Measurement System Analysis (MSA) allows us to identify the sources of error in the data.

Some MSA definitions

Bias: the difference between the average measured value and a standard.
Repeatability: the variability resulting from successive trials using the same equipment.
Reproducibility: variation in the average of the measurements taken by different people.
Accuracy or precision: concerned with the correctness of the average reading; accuracy is the degree to which the average matches the true value.

Measurement system errors fall into two categories: bias errors and precision errors.

Bias Errors

These errors shift the measured data so that it is consistently a set distance from the true value. This is shown in the diagram above. Some examples of bias error are outlined below:

A petrol pump is incorrectly calibrated by 1 litre, so on every sale 1 litre less is pumped into the car than the display shows.
A set of scales in a fishmonger's is incorrect, so each fish sold is weighed 10 grams lighter than it should be.
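The bias definition above (average measured value minus a standard) is simple enough to sketch numerically. Below is a minimal Python illustration using the petrol-pump example; the readings and the displayed standard are made-up numbers, not figures from the text.

```python
def bias(measurements, standard):
    """Bias = average measured value minus the true/reference value."""
    return sum(measurements) / len(measurements) - standard

# A pump calibrated 1 litre low: the customer receives about 29 litres
# every time the display claims 30 (illustrative values).
readings = [29.0, 29.1, 28.9, 29.0]   # litres actually delivered
displayed = 30.0                      # litres the pump claims

print(bias(readings, displayed))      # consistently about -1.0
```

A bias error like this is systematic: the gap stays close to the same value on every trial, which is what distinguishes it from the precision errors described next.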
The clock in a dentist's surgery is 5 minutes slow, meaning that all appointments run 5 minutes later than their due time.

Precision Errors

These errors do not occur in the same way each time, and so add a greater level of variation to the data. They are often related to human interaction with a process, such as people measuring in different ways or taking shortcuts with process steps. There are two categories of precision error:

Repeatability: variation caused by the measuring device. When the same operator measures an output repeatedly with the same device, the variation seen is repeatability.
Reproducibility: variation caused by different people taking the measurements. Different operators may measure the same parts in different ways, leading to variation in the measurements.

How to assess Precision Errors

Gauge RR – Gauge Repeatability and Reproducibility

A Gauge RR assessment of a process enables the level of precision error to be quantified. The product being measured is stable throughout, so any variation in the results is due to repeatability or reproducibility in the measurement process. A Gauge RR assessment produces a percentage score showing how much of the variation is due to repeatability and reproducibility in the gauge. The scores are often assessed in relative terms (for example, one being greater than the other indicates where the improvement focus should be), but some absolute acceptability criteria for the results are outlined below:

Marginal: <30%    Good: <20%    Excellent: <10%

Gauge RR for Continuous Data

Suppose we ask 3 people to measure the height of 10 Christmas trees 3 times each. A sample of the data is provided below. We now have 90 measures by the 3 individuals.
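The acceptability bands above translate directly into a small classification rule. The sketch below uses the text's three bands and, as an assumption not stated in the text, treats a score of 30% or more as unacceptable (a common convention for Gauge RR studies).

```python
def classify_grr(percent_grr):
    """Map a %Gauge R&R score onto the acceptance bands quoted in the text.
    The 'Unacceptable' band for >=30% is a conventional assumption."""
    if percent_grr < 10:
        return "Excellent"
    if percent_grr < 20:
        return "Good"
    if percent_grr < 30:
        return "Marginal"
    return "Unacceptable"

print(classify_grr(8.5))    # Excellent
print(classify_grr(37.23))  # Unacceptable
```

The second call uses the %Study Var figure that the tree study below will produce, showing how such a score would be judged against these bands.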
From this we need to determine whether the level of variation in the measurement system is acceptable. We can determine the gauge levels for repeatability and reproducibility using Minitab; the first output from this is shown below.

Components of Variation: columns 2 and 3 show the levels of repeatability and reproducibility in the data. These results show that the level of reproducibility is higher than repeatability, which is within acceptable tolerances. Improvements should therefore focus on the individuals, to reduce the variation in their measures, rather than on the equipment they are using.

R Chart by Person: a run or control chart showing the sample range of the measures taken by the 3 people of the 10 different Christmas trees. It gives an indication as to whether the operators are measuring consistently. The large variation in the sample ranges indicates a high level of variation in the measures taken by the individuals, which reflects the findings of the first chart.

Xbar Chart by Person: a run or control chart showing the actual measurements of the 10 Christmas trees by the 3 people.

Measurement by Tree Reference: shows the 3 measurements of each tree, which can reveal whether some parts vary more than others. In this example there was a high level of variation in the measurement of tree 10 but very little for tree 2. You may want to investigate the reasons for this further; there may be some reason for the difference, and there could be adaptations that could be adopted from the trees that are easily measured.

Measurement by Person: shows the range of measures from each person, giving a more detailed illustration of reproducibility. In the example we can see that Person 2 has lower average measurements than Persons 1 and 3, and that Person 3 has a lower range and less variation in their measurements.
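The statistics behind the R chart and Xbar chart described above are just the range and mean of each person's repeat measures. A minimal sketch, using a made-up set of three repeat measurements rather than the study's data:

```python
from statistics import mean

def range_and_mean(repeats):
    """Range of the repeats feeds the R chart; their mean feeds the Xbar chart."""
    return max(repeats) - min(repeats), mean(repeats)

# Three repeat height measurements (cm) by one person on one tree (illustrative).
r, xbar = range_and_mean([210, 214, 211])
print("range:", r, "mean:", round(xbar, 1))  # range: 4 mean: 211.7
```

A person whose points on the R chart sit consistently low, as Person 3's do in the text's example, is measuring more repeatably than the others.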
It may be an idea to see whether the technique of Person 3 was different to that of Persons 1 and 2.

Tree Reference * Person Interaction: shows the different measures for each tree by each appraiser. From this it is possible to identify whether the measures of one appraiser differ markedly from the others', which may have a large impact on the results of the analysis.

Session Window Output

Source                 DF      SS       MS        F      P
Tree referen            9  238770  26530.0  27.1453  0.000
Person                  2    7301   3650.7   3.7354  0.044
Tree referen * Person  18   17592    977.3  15.4777  0.000
Repeatability          60    3789     63.1
Total                  89  267452

                                  %Contribution
Source                  VarComp   (of VarComp)
Total Gage RR            456.99          13.86
  Repeatability           63.14           1.92
  Reproducibility        393.84          11.95
    Person                89.11           2.70
    Person*Tree referen  304.73           9.24
Part-To-Part            2839.18          86.14
Total Variation         3296.17         100.00

Process tolerance = 290

                                    StudyVar  %StudyVar  %Tolerance
Source                  StdDev (SD)   (6*SD)      (%SV)  (SV/Toler)
Total Gage RR               21.3772  128.263      37.23       44.23
  Repeatability              7.9463   47.678      13.84       16.44
  Reproducibility           19.8454  119.073      34.57       41.06
    Person                   9.4399   56.639      16.44       19.53
    Person*Tree referen     17.4565  104.739      30.41       36.12
Part-To-Part                53.2840  319.704      92.81      110.24
Total Variation             57.4123  344.474     100.00      118.78

Number of Distinct Categories = 3

As the highlighted p-values are less than 0.05, this indicates that the Person does have an effect on the result. These figures are also reflected in the earlier graphs. The Gauge RR accounts for 37.23% of the total variation and 44.23% of the tolerance, and the figures show that reproducibility is a bigger factor than repeatability in the variation of the measures. The number of distinct categories illustrates how many distinct categories the measurement system is capable of distinguishing. A standard figure is 5, so any result below this indicates that improvement in the measuring system is required.

Gauge RR for Attribute Data

Gauge RR analysis can also be used for attribute data, where data has been classified.
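The VarComp figures in the session window can be recovered from the ANOVA mean squares using the standard expected-mean-squares formulas for a crossed Gauge RR study with p parts, o operators and r repeats. The sketch below plugs in the mean squares from the table; small differences from Minitab's figures arise because the printed mean squares are rounded.

```python
# Study dimensions: 10 trees (parts), 3 people (operators), 3 repeats.
p, o, r = 10, 3, 3

# Mean squares from the ANOVA table above.
ms_part, ms_oper, ms_inter, ms_err = 26530.0, 3650.7, 977.3, 63.1

repeatability = ms_err                         # ~63.1
interaction   = (ms_inter - ms_err) / r        # ~304.73 (Person*Tree referen)
operator      = (ms_oper - ms_inter) / (p * r) # ~89.11  (Person)
part_to_part  = (ms_part - ms_inter) / (o * r) # ~2839.19 (table shows 2839.18)
total_grr     = repeatability + interaction + operator  # ~456.95 (table: 456.99)

# Number of distinct categories: 1.41 * (part SD / gauge SD), truncated.
ndc = int(1.41 * (part_to_part ** 0.5) / (total_grr ** 0.5))
print(round(total_grr, 2), round(part_to_part, 2), ndc)
```

Running this reproduces the session window's variance components to rounding error and gives ndc = 3, matching the reported "Number of Distinct Categories = 3".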
For example, a number of individuals may have been asked to classify data, such as correct/incorrect, pass/fail or classification of colour. The Gauge RR analysis is again used to determine the levels of reproducibility and repeatability. The graphical output for the attribute Gauge RR assessment is shown above.

The Within Appraisers chart shows the level of repeatability for each of the 3 appraisers. It illustrates that the most consistent appraiser is Phil, who reached the same decision on 100% of the pieces he tested, while Peter is the most inconsistent.

The second chart, Appraiser vs Standard, compares how the individuals fared against the standard results. Phil's results most closely match the standard (around 96% of the time), whilst Peter's matched the standard in only 72% of measures.

The Session Window Output is outlined below.

Each Appraiser vs Standard

Assessment Agreement
Appraiser  #Inspected  #Matched  Percent          95% CI
Peter              25        18    72.00  (50.61, 87.93)
Pam                25        21    84.00  (63.92, 95.46)
Phil               25        24    96.00  (79.65, 99.90)

# Matched: appraiser's assessment across trials agrees with the known standard. This data shows the level of agreement with the standard for each appraiser.

Between Appraisers

Assessment Agreement
#Inspected  #Matched  Percent          95% CI
        25        18    72.00  (50.61, 87.93)

This data shows the level of agreement between the appraisers. In this case, out of 25 pieces there was full agreement on 18, or 72%.

All Appraisers vs Standard

Assessment Agreement
#Inspected  #Matched  Percent          95% CI
        25        17    68.00  (46.50, 85.05)

This data shows the level of agreement between the appraisers and the standard. In this case, out of 25 pieces there was full agreement with the standard on 17, or 68%.

MSA in practice

You may not have the data to undertake a Minitab analysis of the level of Gauge RR in the measuring system, but the principles can still be applied.
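The agreement percentages in the attribute study are straightforward counting: the share of pieces where an appraiser's call matches the standard, and the share where every appraiser makes the same call. A minimal sketch on a tiny made-up pass/fail sample (not the study's 25 pieces):

```python
def pct_match_standard(calls, standard):
    """% of pieces where an appraiser's calls agree with the known standard."""
    hits = sum(c == s for c, s in zip(calls, standard))
    return 100.0 * hits / len(standard)

def pct_all_agree(*appraisers):
    """% of pieces where every appraiser made the same call."""
    hits = sum(len(set(calls)) == 1 for calls in zip(*appraisers))
    return 100.0 * hits / len(appraisers[0])

# Illustrative calls on 4 pieces (the real study used 25 pieces per appraiser).
standard = ["pass", "fail", "pass", "pass"]
peter    = ["pass", "pass", "pass", "fail"]
pam      = ["pass", "fail", "pass", "pass"]

print(pct_match_standard(peter, standard))  # 50.0
print(pct_all_agree(peter, pam))            # 50.0
```

The session window's "Each Appraiser vs Standard", "Between Appraisers" and "All Appraisers vs Standard" rows are these two calculations applied per appraiser, across appraisers, and across appraisers against the standard respectively.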
When measuring any process, steps should be taken to ensure that the people measuring use the same tools and measure the process in the same way. For example, start and end points for measures should be agreed between the people doing the measuring.