ABSTRACT
Several gaps are identified in the current regulatory guidelines that govern the analytical method life cycle for the testing of biopharmaceuticals. This article provides strategic guidance on how to monitor and control the life cycle of an analytical test method. Analytical method transfer, analytical method component equivalency, and analytical method comparability protocols are discussed in light of risk-based strategies for validation extensions. The use of an analytical method maintenance program is suggested to control, over time, the predictable risks to both patients and the firm.
The successful completion of analytical method transfer (AMT) is a regulatory expectation for the extension of the validation status to other laboratories. The demonstration of equivalent test results, and therefore an acceptable level of reproducibility when testing at a different location, can limit the potential risk to the patient (hence the regulatory expectation). Acceptable reproducibility also limits the risk of failing test results for the biopharmaceutical firm because established probabilities of passing specifications can be maintained. Similarly, postvalidation changes in method components should be monitored and controlled to avoid significant (negative) changes in material or product release probabilities.
This two-part series focuses on the postvalidation work that may be required to ensure process and product quality over time. This article discusses practical concepts for ensuring successful validation extensions. Part II, to be published in the October issue, will include practical tools to ensure a validation continuum (maintenance) for validated methods. The second part will also include case studies for deriving meaningful, risk-based acceptance criteria for validation extensions and validation maintenance, as well as a case study on how to reduce analytical variability in validated systems.
When replacing approved test methods with improved ones, analytical method comparability (AMC) data should be submitted together with the method description and validation results.13 Once a method is approved and in routine use, it should be maintained in an analytical method maintenance (AMM) program that can be administered through the validation master plan (VMP).14 If done well, this will ensure, like all postvalidation activities, consistent (accurate and precise) measurements of production process and product quality.14 What exactly are the critical elements of good validations, validation extensions, or suitable validation maintenance? The answer lies mostly in the preset acceptance criteria for method performance and, of course, in the actual validation results obtained. For example, if changes in analytical method components cause a change in test results, and therefore in process or product quality measurements, we should be able to detect when method suitability limits have been exceeded. In other words, if an analytical method change will cause a predictable shift or spread of results with respect to specification(s), and therefore negatively impact the probability of releasing material, we should monitor this. To monitor and possibly compensate, we must first set reasonable suitability limits and then continuously control the overall method performance. Often, the most difficult part is to estimate the associated risk of changed results with respect to both patient and firm and, from this, to set reasonable acceptance criteria. Once we truly understand why and when there will be a need for method improvement, we will likely know what should be done to compensate for the difference.
Each production process has an associated rejection rate whose probability can be readily calculated by relating specifications to production process performance. However, instead of dealing with only the two outcomes (pass or reject) that are "visible" and monitored by statistical process control (SPC), we should consider two additional possibilities for all reported results. There is, therefore, a total of four possible cases for releasing product or material, of which three should be avoided as often as practically possible. The four cases for reported test results are:
1. Measured results are within established specifications.
1A. The true value is also within specifications (correct release).
1B. The true value is outside specifications (a false pass).
2. Measured results are outside established specifications (OOS).
2A. The true value is also outside specifications (correct rejection).
2B. The true value is within specifications (a false rejection).
Cases 1A and 2A are routinely monitored by SPC. Cases 1B and 2B originate from other uncertainties, such as imperfect test method performance or poor sampling, and are not readily visible to SPC. Cases 2A and 2B are obviously undesirable because the firm can neither process nor sell this product or material. Case 1B constitutes a risk primarily to patients, but it also poses a risk to the firm if adverse product-related reactions or over- or under-dosing were actually to occur. Case 2B constitutes a loss solely to the firm and should be avoided mainly for profit reasons, although other problems may also arise from this situation.
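The four cases above can be illustrated with a small simulation. The sketch below is hypothetical and not from the article: all specification limits, process parameters, and method-error values are assumed for illustration only. It draws a "true" batch value, adds measurement error, and classifies each batch into case 1A, 1B, 2A, or 2B.

```python
import random

# Hypothetical illustration: simulate the four release cases by drawing a
# "true" product value and adding measurement error, then classifying each
# batch against the specification limits. All numbers are assumptions.
random.seed(42)

SPEC_LO, SPEC_HI = 90.0, 110.0   # assumed two-sided specification (% of label claim)
PROC_MEAN, PROC_SD = 100.0, 3.0  # assumed process distribution
MEAS_BIAS, MEAS_SD = 0.0, 1.5    # assumed method bias and intermediate precision

counts = {"1A": 0, "1B": 0, "2A": 0, "2B": 0}
n = 100_000
for _ in range(n):
    true_val = random.gauss(PROC_MEAN, PROC_SD)
    measured = true_val + random.gauss(MEAS_BIAS, MEAS_SD)
    true_in = SPEC_LO <= true_val <= SPEC_HI
    meas_in = SPEC_LO <= measured <= SPEC_HI
    if meas_in:
        counts["1A" if true_in else "1B"] += 1  # released: correctly or falsely
    else:
        counts["2A" if not true_in else "2B"] += 1  # rejected: correctly or falsely

for case, k in sorted(counts.items()):
    print(f"Case {case}: {k / n:.4%}")
```

Cases 1B and 2B appear only when measurement error is nonzero, which is exactly why they remain invisible to SPC charts built on reported results.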
For our validation extension acceptance criteria, we should primarily set acceptable protocol limits from SPC in relation to specifications. We should consider the likelihood and impact of cases 1B and 2B, and avoid measurement errors as much as possible as part of the AMM program. Inaccurate or imprecise measurements will always cause a lower-than-ideal probability of observing results within specifications. The acceptance criteria for analytical method validation (AMV) and its continuum requirements must, therefore, ensure a low likelihood for all cases but 1A. To meaningfully estimate risk to patients and the firm, we must understand our process data and integrate test measurement aspects into our risk-based validation strategies. Good risk management tools will dictate how much assay performance characteristics can deviate from the ideal. This will then set limits on how much we can tolerate a test method deviating over time from the ideal (100% accurate and precise). It should also now be apparent why it is so important to maintain our validation status with an AMM program. When this is ignored, we negatively affect all four cases (negative here means increased risk to patient or firm). Although undetected, negative effects will occur for the "invisible" cases 1B and 2B because measurement errors are not captured by regular SPC. This may also lead to a lack of process understanding and control, conflict with current regulatory expectations (process analytical technology [PAT]), and reduced profits for the firm in the long run.15
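The claim that imprecise measurements always lower the probability of observing results within specifications can be shown directly: the observed spread is the process and method variances combined, so any added method variance widens the reported distribution. A minimal sketch, with all specification and variability numbers assumed for illustration:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_within_spec(mu, sigma_process, sigma_method, lo, hi):
    """Probability a reported result falls within [lo, hi] when the
    observed spread combines process and method variability in quadrature."""
    s = math.hypot(sigma_process, sigma_method)
    return norm_cdf((hi - mu) / s) - norm_cdf((lo - mu) / s)

# Assumed numbers for illustration: spec 90-110, centered process with sd 3.
for sm in (0.0, 1.5, 3.0, 6.0):
    p = p_within_spec(100, 3, sm, 90, 110)
    print(f"method sd = {sm:>4}: P(within spec) = {p:.4f}")
```

As the method standard deviation grows, the probability of a within-specification result falls monotonically, even though the underlying process never changed.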
ANALYTICAL METHOD TRANSFER
The AMT reports should include descriptive statistics (means, standard deviations, and coefficients of variation), comparative statistics (ANOVA p-values) for inter-laboratory results, and the difference-of-means values for each pair of laboratories. Each report documents evidence that the transferred test method is suitable (qualified) for testing at the receiving laboratory.
For cases where the ANOVA p-value is less than 0.05, secondary acceptance criteria should be established for the comparison of means and the variability of results to demonstrate the overall lab-to-lab reproducibility of test results. It is advisable to include a numerical fall-back limit (or percentage) because the likelihood of observing statistical differences may increase with the precision of the test method. In addition, some differences (bias) between instruments, operator performances, and days are expected.9 We should tailor our acceptance criteria for overall (intermediate) precision and for the maximum tolerated difference between mean laboratory results (accuracy or matching) to minimize the likelihood of obtaining OOS results (2A and 2B) or 1B results.9 The setting and justification of all acceptance criteria must strike a balance and is a critical part of each protocol. A detailed AMT case study was presented elsewhere.9
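The statistics named above can be sketched in a few lines. This is a hypothetical example, not the article's case study: the potency values and the 2.0% fall-back limit are assumptions chosen only to show the mechanics of the descriptive statistics, the ANOVA p-value, and the fall-back comparison of laboratory means.

```python
import statistics
from scipy import stats  # one-way ANOVA for the inter-laboratory comparison

# Hypothetical potency results (% of reference) from a transfer exercise;
# values and limits are illustrative, not from the article.
sending_lab   = [98.5, 99.2, 100.1, 99.8, 98.9, 99.5]
receiving_lab = [99.9, 100.4, 99.1, 100.8, 100.2, 99.7]

# Descriptive statistics (mean, SD, %CV) per laboratory
for name, data in (("sending", sending_lab), ("receiving", receiving_lab)):
    m, s = statistics.mean(data), statistics.stdev(data)
    print(f"{name:>9}: mean={m:.2f}  sd={s:.2f}  %CV={100 * s / m:.2f}")

# Comparative statistic: one-way ANOVA p-value across the two labs
f_stat, p_value = stats.f_oneway(sending_lab, receiving_lab)

# Difference of laboratory means against a numerical fall-back limit, used
# when a significant p-value reflects high method precision rather than a
# practically relevant bias (limit of 2.0 assumed here).
mean_diff = abs(statistics.mean(sending_lab) - statistics.mean(receiving_lab))
FALLBACK_LIMIT = 2.0
print(f"ANOVA p = {p_value:.4f}; |mean difference| = {mean_diff:.2f} "
      f"({'within' if mean_diff <= FALLBACK_LIMIT else 'exceeds'} fall-back limit)")
```

Even if the p-value falls below 0.05, the transfer can still pass on the secondary criterion when the absolute mean difference stays inside the prespecified fall-back limit.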
ANALYTICAL METHOD COMPARABILITY
Quantitation limits (QLs) could also be compared. However, both QLs would have to be estimated by the same principle (e.g., by regression analysis). A low QL is desirable because it lets us quantitatively report and monitor low-value results by SPC. There are several ways to compare QLs. For example, we could compare the regression coefficients of both linear assay response curves to estimate both QLs; this would also give a general idea of how accuracy and precision characteristics compare over the assay range. Table 2 provides general guidance. Particular examples of noninferiority, equivalence, and superiority testing to demonstrate method comparability were provided and discussed elsewhere.13
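Estimating both QLs by the same regression principle can be sketched as follows, using the ICH Q2-style relation QL = 10·σ/S, where σ is the residual standard deviation of the calibration regression and S its slope. The calibration data for the two methods are assumed for illustration only.

```python
import statistics

def quantitation_limit(conc, resp):
    """Estimate a quantitation limit from a linear calibration curve using
    the ICH Q2-style formula QL = 10 * sigma / slope, where sigma is the
    residual standard deviation of the least-squares regression."""
    n = len(conc)
    mx, my = statistics.mean(conc), statistics.mean(resp)
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
    slope = sxy / sxx
    intercept = my - slope * mx
    residuals = [y - (intercept + slope * x) for x, y in zip(conc, resp)]
    sigma = (sum(r * r for r in residuals) / (n - 2)) ** 0.5
    return 10.0 * sigma / slope

# Hypothetical calibration data for an approved and a candidate method;
# responses are assumed, not taken from the article.
conc = [1, 2, 5, 10, 20, 50]
resp_current  = [2.1, 4.0, 10.3, 19.8, 40.5, 99.6]
resp_improved = [2.0, 4.1, 10.0, 20.1, 40.0, 100.1]
ql_cur = quantitation_limit(conc, resp_current)
ql_new = quantitation_limit(conc, resp_improved)
print(f"QL current: {ql_cur:.2f}  QL improved: {ql_new:.2f}")
```

Because both QLs come from the same regression principle applied over the same concentration range, their ratio is a fair basis for comparing the two methods at the low end of the assay range.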
No matter which comparability category we use for a statistical comparison (with α = 0.05), a protocol should provide the design of the experiments to be done and the prespecified value for the allowable difference in results. The prespecified maximum allowable difference is illustrated in CPMP's Points to Consider on the Choice of Non-Inferiority Margin.19 The allowable difference should be set similarly to AMV or AMT: it should be set and justified by relating specifications to SPC data and by considering the likelihood of observing any of the four cases (1A, 1B, 2A, and 2B).13
Stephan O. Krause, PhD, is the manager of QC Technical Services and Compendial Liaison at Bayer HealthCare, LLC, Berkeley, CA 94701, tel. 510.705.4191, stephan.krause.b@bayer.com
REFERENCES
1. Guideline for Industry, Text on Validation of Analytical Procedures, ICH Q2A, 60 (1995). http://www.fda.gov/cder/guidance/ichq2a.pdf
2. Guidance for Industry, Q2B Validation of Analytical Procedures: Methodology, ICH Q2B, 62 (1996). http://www.fda.gov/cder/guidance/1320fnl.pdf
3. The Fitness for Purpose of Analytical Method, Eurachem, Teddington, UK (1998). http://www.eurachem.ul.pt/guides/valid.pdf
4. Traceability in Chemical Measurement, Eurachem/CITAC, Teddington, UK (2003). http://www.eurachem.ul.pt/guides/EC_Trace_2003.pdf
5. Guidance for Industry, Bioanalytical Method Validation (2001). http://www.fda.gov/CDER/GUIDANCE/4252fnl.htm
6. Draft Guidance for Industry, Analytical Procedures and Methods Validation (2000). http://www.fda.gov/cder/guidance/2396dft.htm
7. Technical Report 33, Evaluation, Validation and Implementation of New Microbiological Testing Methods (PDA, Bethesda, MD, USA).
8. Alternative Methods For Control of Microbiological Quality, EP. Supplement 5.5 [07/2006:50106] (December 2005). http://www.pheur.org/
9. S.O. Krause, BioPharm Int. 17(3), 28–36 (2004).
10. S.O. Krause, BioPharm Int. 17(10), 52–61 (2004).
11. S.O. Krause, BioPharm Int. 17(11), 46–52 (2004).
12. S.O. Krause, BioPharm International Validation Guide, a supplement to BioPharm Int. 18(3) (2005).
13. S.O. Krause, BioPharm International Guide to Bioanalytical Advances, a supplement to BioPharm Int. 18(9) (2005).
14. S.O. Krause, BioPharm Int. 18(10), 52–59 (2005).
15. Guidance for Industry PAT — A Framework for Innovative Pharmaceutical Development, Manufacturing, and Quality Assurance (2004). http://www.fda.gov/cder/guidance/6419fnl.pdf
16. ISPE Good Practice Guide: Technology Transfer (International Society for Pharmaceutical Engineering, Tampa, FL, 2003).
17. Statistical Principles for Clinical Trials, ICH E9 (1998). http://www.emea.eu.int/pdfs/human/ich/036396en.pdf
18. Points to Consider on Switching Between Superiority and Non-Inferiority, CPMP (2000). http://www.emea.eu.int/pdfs/human/ewp/048299en.pdf
19. Points to Consider On The Choice Of Non-Inferiority Margin, CPMP (2004). home.att.ne.jp/red/akihiro/emea/215899en_ptc.pdf