Process Validation Analysis Tools

Author: Chandra Kishra

a. Acceptance Sampling Plan – An acceptance sampling plan takes a sample of product and uses this sample to make an accept or reject decision. Acceptance sampling plans are commonly used in manufacturing to decide whether to accept (release) or to reject (hold) lots of product. However, they can also be used during validation to accept (pass) or to reject (fail) the process. Following the acceptance by a sampling plan, one can make a confidence statement such as: "With 95% confidence, the defect rate is below 1% defective."
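
The sample size behind such a confidence statement can be sketched as follows. This is a minimal illustration assuming an accept-on-zero-defects plan; real sampling plans often allow a nonzero acceptance number, which requires binomial calculations instead.

```python
import math

def zero_failure_sample_size(confidence, max_defect_rate):
    """Smallest n such that observing 0 defects in n units supports the
    claim: with the stated confidence, the defect rate is below
    max_defect_rate. Solves (1 - p)^n <= 1 - confidence for n."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_defect_rate))

# "With 95% confidence, the defect rate is below 1% defective"
# requires 299 units sampled with zero defects found:
print(zero_failure_sample_size(0.95, 0.01))  # 299
```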

b. Analysis of Means (ANOM) – Statistical study for determining if significant differences exist between cavities, instruments, etc. It has many uses, including determining if a measurement device is reproducible with respect to operators and determining if differences exist between fill heads, etc. A simpler and more graphical alternative to Analysis of Variance (ANOVA).

c. Analysis of Variance (ANOVA) – Statistical study for determining if significant differences exist between cavities, instruments, etc. Alternative to Analysis of Means (ANOM).
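
The core of a one-way ANOVA can be sketched in a few lines. This is a simplified illustration assuming equal group sizes; it computes only the F statistic, which would then be compared against an F critical value (from tables or statistical software) to judge significance.

```python
import statistics

def one_way_f(groups):
    """F statistic for a one-way ANOVA with equal group sizes:
    the ratio of between-group to within-group mean squares."""
    k, n = len(groups), len(groups[0])
    grand = statistics.mean([x for g in groups for x in g])
    ms_between = n * sum((statistics.mean(g) - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum(statistics.variance(g) for g in groups) / k
    return ms_between / ms_within

# Hypothetical measurements from two cavities:
f = one_way_f([[1, 2, 3], [2, 3, 4]])
```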

d. Capability Study – Capability studies are performed to evaluate the ability of a process to consistently meet a specification. A capability study is performed by selecting a small number of units periodically over time. Each period of time is called a subgroup. For each subgroup, the average and range is calculated. The averages and ranges are plotted over time using a control chart to determine if the process is stable or consistent over time. If so, the samples are then combined to determine whether the process is adequately centered and the variation is sufficiently small. This is accomplished by calculating capability indices. The most commonly used capability indices are Cp and Cpk. If acceptable values are obtained, the process consistently produces product that meets the specification limits. Capability studies are frequently performed towards the end of the validation to demonstrate that the outputs consistently meet the specifications. However, they can also be used to study the behavior of the inputs in order to perform a tolerance analysis.
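
The Cp and Cpk indices mentioned above can be computed as follows. This is a minimal sketch using the overall sample standard deviation; a full capability study would first confirm stability with a control chart and typically use a within-subgroup estimate of sigma. The fill-weight data and limits are hypothetical.

```python
import statistics

def cp_cpk(data, lsl, usl):
    """Capability indices: Cp compares the spread of the specification
    to the spread of the process; Cpk additionally penalizes off-center
    processes. Values above 1.33 are a common acceptance criterion."""
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical fill weights against limits 9.0-11.0 grams:
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.9, 10.1]
cp, cpk = cp_cpk(data, 9.0, 11.0)
```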

e. Challenge Test – A challenge test is a test or check performed to demonstrate that a feature or function is working. For example, to demonstrate that the power backup is functioning, power could be cut to the process. To demonstrate that a sensor designed to detect bubbles in a line works, bubbles could be purposely introduced.

f. Component Swapping Study – Study to isolate the cause of a difference between two units of product or two pieces of equipment. Requires the ability to disassemble units and swap components in order to determine if the difference remains with the original units or goes with the swapped components.

g. Control Chart – Control charts are used to detect changes in the process. A sample, typically consisting of 5 units, is selected periodically. The average and range of each sample is calculated and plotted. The plot of the averages is used to determine if the process average changes. The plot of the ranges is used to determine if the process variation changes. To aid in determining if a change has occurred, control limits are calculated and added to the plots. The control limits represent the maximum amount that the average or range should vary if the process does not change. A point outside the control limits indicates that the process has changed. When a change is identified by the control chart, an investigation should be made as to the cause of the change. Control charts help to identify key input variables causing the process to shift and aid in the reduction of the variation. Control charts are also used as part of a capability study to demonstrate that the process is stable or consistent.

h. Designed Experiment – The term designed experiment is a general term that encompasses screening experiments, response surface studies, and analysis of variance. In general, a designed experiment involves purposely changing one or more inputs and measuring the resulting effect on one or more outputs.

i. Dual Response Approach to Robust Design – One of three approaches to robust design. Involves running response surface studies to model the average and variation of the outputs separately. The results are then used to select targets for the inputs that minimize the variation while centering the average on the target. Requires that the variation during the study be representative of long term manufacturing. Alternatives are Taguchi methods and robust tolerance analysis.

j. Failure Modes and Effects Analysis (FMEA) – An FMEA is a systematic analysis of the potential failure modes. It includes the identification of possible failure modes, determination of the potential causes and consequences, and an analysis of the associated risk. It also includes a record of corrective actions or controls implemented, resulting in a detailed control plan. FMEAs can be performed on both the product and the process. Typically an FMEA is performed at the component level, starting with potential failures and then tracing up to the consequences. This is a bottom-up approach. A variation is a Fault Tree Analysis, which starts with possible consequences and traces down to the potential causes. This is the top-down approach. An FMEA tends to be more detailed and better at identifying potential problems. However, a fault tree analysis can be performed earlier in the design process, before the design has been resolved down to individual components.

k. Fault Tree Analysis (FTA) – A variation of a FMEA. See FMEA for a comparison.

l. Gauge R&R Study – Study for evaluating the precision and accuracy of a measurement device and the reproducibility of the device with respect to operators. Alternatives are to perform capability studies and analysis of means on the measurement device.

m. Mistake Proofing Methods – Mistake proofing refers to the broad array of methods used to either make the occurrence of a defect impossible or to ensure that the defect does not pass undetected. The Japanese refer to mistake proofing as Poka-Yoke. The general strategy is to first attempt to make it impossible for the defect to occur. For example, to make it impossible for a part to be assembled backwards, make the ends of the part different sizes or shapes so that the part only fits one way. If this is not possible, attempt to ensure the defect is detected. This might involve mounting a bar above a chute that stops any parts that are too high from continuing down the line. Other possibilities include mitigating the effect of a defect (seat belts in cars) and lessening the chance of human errors by implementing self-checks.

n. Multi-Vari Chart – Graphical procedure for isolating the largest source of variation so that further efforts concentrate on that source.

o. Response Surface Study – A response surface study is a special type of designed experiment whose purpose is to model the relationship between the key input variables and the outputs. Performing a response surface study involves running the process at different settings for the inputs, called trials, and measuring the resulting outputs. An equation can then be fit to the data to model the effects of the inputs on the outputs. This equation can then be used to find optimal targets using robust design methods and to establish targets or operating windows using a tolerance analysis. The number of trials required by a response surface study increases exponentially with the number of inputs. It is desirable to keep the number of inputs studied to a minimum. However, failure to include a key input can compromise the results. To ensure that only the key input variables are included in the study, a screening experiment is frequently performed first.

p. Robust Design Methods – Robust design methods refer collectively to the different methods of selecting optimal targets for the inputs. Generally, when one thinks of reducing variation, tightening tolerances comes to mind. However, as demonstrated by Taguchi, variation can also be reduced by the careful selection of targets. When nonlinear relationships exist between the inputs and the outputs, one can select targets for the inputs that make the outputs less sensitive to the inputs. The result is that while the inputs continue to vary, less of this variation is transmitted to the output, causing the output to vary less. Reducing variation by adjusting targets is called robust design. In robust design, the objective is to select targets for the inputs that result in on-target performance with minimum variation. Several methods of obtaining robust designs exist, including robust tolerance analysis, the dual response approach, and Taguchi methods.

q. Robust Tolerance Analysis – One of three approaches to robust design. Involves running a designed experiment to model the output’s average and then using the statistical approach to tolerance analysis to predict the output’s variation. Requires estimates of the amounts that the inputs will vary during long-term manufacturing. Alternatives are Taguchi methods and the dual response approach.

r. Screening Experiment – A screening experiment is a special type of designed experiment whose primary purpose is to identify the key input variables. Screening experiments are also referred to as fractional factorial experiments or Taguchi L-arrays. Performing a screening experiment involves running the process at different settings for the inputs, called trials, and measuring the resulting outputs. From this, it can be determined which inputs affect the outputs. Screening experiments typically require twice as many trials as input variables. For example, 8 variables can be studied in 16 trials. This makes it possible to study a large number of inputs in a reasonable amount of time. Starting with a larger number of variables reduces the chances of missing an important variable. Frequently a response surface study is performed following a screening experiment to gain further understanding of the effects of the key input variables on the outputs.
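
A tiny worked sketch of the idea: a half-fraction design studying 3 factors in 4 trials, with main effects estimated as the difference between the average response at each factor's high and low settings. Note that in this fraction the third factor is confounded with the A×B interaction, a standard trade-off in screening designs; the response values are hypothetical.

```python
from itertools import product

def main_effects(design, responses):
    """Estimate main effects from a two-level design: for each factor,
    (mean response at +1) minus (mean response at -1)."""
    k = len(design[0])
    effects = []
    for j in range(k):
        hi = [y for row, y in zip(design, responses) if row[j] == +1]
        lo = [y for row, y in zip(design, responses) if row[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Half-fraction for 3 factors in 4 trials: column C is set to A*B.
half_fraction = [(a, b, a * b) for a, b in product((-1, +1), repeat=2)]
# Hypothetical measured outputs for the four trials:
y = [12.0, 14.0, 11.0, 17.0]
effects = main_effects(half_fraction, y)  # one estimate per factor
```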

s. Taguchi Methods – One of three approaches to robust design. Involves running a designed experiment to get a rough understanding of the effects of the input targets on the average and variation. The results are then used to select targets for the inputs that minimize the variation while centering the average on the target. Similar to the dual response approach except that while the study is being performed, the inputs are purposely adjusted by small amounts to mimic long-term manufacturing variation. Alternatives are the dual response approach and robust tolerance analysis.

t. Tolerance Analysis – Using tolerance analysis, operating windows can be set for the inputs that ensure the outputs will conform to requirements. Performing a tolerance analysis requires an equation describing the effects of the inputs on the output. If such an equation is not available, a response surface study can be performed to obtain one. To help ensure manufacturability, tolerances for the inputs should initially be based on the plant's and suppliers' ability to control them. Capability studies can be used to estimate the ranges that the inputs currently vary over. If this does not result in an acceptable range for the output, the tolerance of at least one input must be tightened. However, tightening a tolerance beyond the current capability of the plant or supplier requires that improvements be made or that a new plant or supplier be selected. Before tightening any tolerances, robust design methods should be considered.
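
One common form of the calculation is the statistical (root-sum-square) tolerance stack, sketched below. It assumes independent inputs and a roughly linear relationship near the operating point; the sensitivities are the partial derivatives of the output with respect to each input, and the three-part stack shown is hypothetical.

```python
import math

def rss_tolerance(sensitivities, input_tolerances):
    """Statistical (root-sum-square) tolerance stack: predicted output
    tolerance from each input's tolerance weighted by its sensitivity.
    Assumes independent inputs and near-linear behavior."""
    return math.sqrt(sum((s * t) ** 2
                         for s, t in zip(sensitivities, input_tolerances)))

# Hypothetical stack of three parts, each with unit sensitivity:
worst_case = sum([0.1, 0.2, 0.2])                         # 0.5
statistical = rss_tolerance([1, 1, 1], [0.1, 0.2, 0.2])   # 0.3
```

The statistical stack is tighter than the worst-case sum because the inputs are unlikely to all sit at their extremes simultaneously.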

u. Variance Components Analysis – Statistical study used to estimate the relative contributions of several sources of variation. For example, variation on a multi-head filler could be the result of shifting of the process average over time, fill head differences, and short-term variation within a fill head. A variance components analysis can be used to estimate the amount of variation contributed by each source.
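
A minimal one-way version of the calculation is sketched below, splitting total variation into a between-group component (e.g., fill head differences) and a within-group component (short-term variation). It uses the method of moments and assumes equal group sizes; a full analysis of a multi-head filler would add a time component as well. The data are hypothetical.

```python
import statistics

def variance_components(groups):
    """One-way variance components (method of moments), equal group sizes.
    Returns the estimated between-group and within-group variances; the
    between-group estimate is clamped at zero if negative."""
    n = len(groups[0])
    means = [statistics.mean(g) for g in groups]
    ms_within = statistics.mean([statistics.variance(g) for g in groups])
    ms_between = n * statistics.variance(means)
    var_between = max((ms_between - ms_within) / n, 0.0)
    return {"between": var_between, "within": ms_within}

# Two fills from each of three hypothetical heads:
vc = variance_components([[10, 12], [14, 16], [18, 20]])
```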
