General Strategies for Validation Extensions: A Guide for Testing Biopharmaceuticals — Part IB


The AMT reports should include descriptive statistics (means, standard deviations, and coefficients of variation), comparative statistics (ANOVA p-values) for inter-laboratory results, and the difference-of-means values for both (or all) laboratories. Each report documents evidence that the transferred test method is suitable (qualified) for testing at the receiving laboratory.
For cases where the ANOVA p-value is less than 0.05, secondary acceptance criteria should be established for the comparison of means and the variability of results to demonstrate the overall lab-to-lab reproducibility of test results. It is advisable to include a numerical fall-back limit (or percentage) because the likelihood of observing statistically significant differences increases with the precision of the test method. In addition, some differences (bias) between instruments, operator performances, and days are expected.9 Acceptance criteria for overall (intermediate) precision and for the maximum tolerated difference between mean laboratory results (accuracy or matching) should be tailored to minimize the likelihood of obtaining OOS results (cases 2A and 2B) or 1B results.9 The setting and justification of all acceptance criteria must strike a balance and is a critical part of each protocol. A detailed AMT case study was presented elsewhere.9
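The report contents described above can be sketched in code. The following is a minimal illustration, not a validated procedure: the laboratory names and potency values are invented, and the p-value step is left as a comment because it requires the F-distribution.

```python
# Hypothetical sketch of AMT report statistics for a sending and a receiving
# laboratory. All data values are invented for illustration only.
from statistics import mean, stdev

sending   = [98.2, 99.1, 97.8, 98.6, 98.9, 98.4]   # % potency, sending lab
receiving = [97.9, 98.8, 98.1, 98.5, 98.3, 98.0]   # % potency, receiving lab

def describe(results):
    """Descriptive statistics: mean, standard deviation, and %CV."""
    m, s = mean(results), stdev(results)
    return m, s, 100 * s / m

def anova_f(group_a, group_b):
    """One-way ANOVA F statistic for two equally sized groups."""
    n = len(group_a)
    grand = mean(group_a + group_b)
    # Between-group mean square (1 degree of freedom for two groups)
    ms_between = n * ((mean(group_a) - grand) ** 2 + (mean(group_b) - grand) ** 2)
    # Within-group (pooled) mean square, df = 2n - 2
    ms_within = (sum((x - mean(group_a)) ** 2 for x in group_a)
                 + sum((x - mean(group_b)) ** 2 for x in group_b)) / (2 * n - 2)
    return ms_between / ms_within

m_s, sd_s, cv_s = describe(sending)
m_r, sd_r, cv_r = describe(receiving)
diff_of_means = m_s - m_r
f_stat = anova_f(sending, receiving)
# The p-value for f_stat comes from the F(1, 2n-2) distribution; a p-value
# below 0.05 would trigger the secondary acceptance criteria discussed above.
```

In practice these figures would be generated by a validated statistics package; the point here is only which quantities the report tabulates.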
Analytical method comparability
Because we need to demonstrate equality or improvement whenever approved methods are replaced, which method performance characteristics should be compared, and how? Table 2 provides guidance on which validation characteristics to use in comparability protocols for each assay type per ICH Q2A/B. All qualitative tests should include a comparison of hit-to-miss ratios (for "specificity") between the approved method and the new method. If a qualitative limit test is exchanged, the detection limit (DL) of the new method should be compared and should be equal or lower. For all quantitative methods, the performance characteristics accuracy and precision (intermediate precision) should be compared.13 It is of great regulatory concern whether results may change overall, either by drifting (a change in "accuracy" or "matching") or by an increase in data spreading ("intermediate precision"). An increase in data spreading, or lack of precision, will mostly increase the likelihood of observing cases 1B or 2B and should be avoided. A drift in results, or "lack of matching," may require a change in the specifications. This drift in release results can occur in either direction, toward lower or higher results. Neither direction is an acceptable outcome for the demonstration of accuracy, and testing for equivalence between methods is therefore correct.13 The goal of comparing other characteristics (e.g., DL) is different because at least two outcomes are acceptable: equivalence or superiority (i.e., a lower DL).13 A third comparison category, the demonstration of non-inferiority, is usually the easiest to pass for comparability.
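For the qualitative case, the hit-to-miss comparison reduces to comparing detection rates. A toy sketch, with invented counts (a real protocol would pre-specify the number of spiked and blank samples and the acceptance rule):

```python
# Illustrative comparison of hit-to-miss ratios ("specificity") for a
# qualitative test. The counts below are invented example data.
approved_hits, approved_runs = 29, 30    # positive results on spiked samples
new_hits, new_runs = 30, 30

approved_rate = approved_hits / approved_runs
new_rate = new_hits / new_runs
# For comparability, the new method's hit rate should be at least as good
# (equivalent or superior) as the approved method's.
not_inferior = new_rate >= approved_rate
```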
However, we should keep in mind that any of the comparability categories (non-inferiority, equivalence, superiority) applied per ICH E9 and Committee for Proprietary Medicinal Products (CPMP) guidance documents must be properly chosen and justified.13,17–19 In other words, non-inferiority testing may be justified for the comparison of a primary characteristic such as DL if other secondary criteria (e.g., an increased number of tests or test samples) can compensate for the small level of inferiority in the primary comparison characteristic.13
Quantitation limits (QLs) could also be compared. However, both QLs would have to be estimated by the same principle (e.g., estimated by regression analysis). A low QL is desirable because it lets us quantitatively report and monitor low-value results by SPC. There are several ways to compare QLs. For example, we could compare the regression coefficients of both linear assay response curves to estimate both QLs, which would also give a general idea of how accuracy and precision characteristics compare over the assay range. Table 2 constitutes general guidance. Particular examples of non-inferiority, equivalence, and superiority testing to demonstrate method comparability were provided and discussed elsewhere.13
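The regression-based QL estimate can be sketched as follows, using the residual-based formula QL = 10·σ/S from ICH Q2B (σ is the residual standard deviation of the response curve and S its slope). The two response curves are invented example data; only the calculation principle is from the guidance.

```python
# Sketch of a regression-based QL comparison between an approved and a new
# method, per the ICH Q2B formula QL = 10 * sigma / S. Data are illustrative.
from statistics import mean
from math import sqrt

def quantitation_limit(conc, resp):
    """Estimate QL from a linear response curve by least-squares regression."""
    n = len(conc)
    cx, cy = mean(conc), mean(resp)
    slope = (sum((x - cx) * (y - cy) for x, y in zip(conc, resp))
             / sum((x - cx) ** 2 for x in conc))
    intercept = cy - slope * cx
    # Residual standard deviation of the regression line (df = n - 2)
    sigma = sqrt(sum((y - (intercept + slope * x)) ** 2
                     for x, y in zip(conc, resp)) / (n - 2))
    return 10 * sigma / slope

conc = [1, 2, 4, 8, 16]                      # concentration (arbitrary units)
resp_old = [10.2, 19.8, 40.5, 79.6, 160.4]   # approved method responses
resp_new = [10.1, 20.2, 39.9, 80.3, 159.8]   # new method responses

ql_old = quantitation_limit(conc, resp_old)
ql_new = quantitation_limit(conc, resp_new)
# For comparability, ql_new should be equal to or lower than ql_old
# (equivalent or superior).
```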
No matter which comparability category we use for a statistical comparison (at a significance level of 0.05), the protocol should provide the design of the experiments to be performed and the pre-specified value for the maximum allowable difference in results. The pre-specified maximum allowable difference is illustrated in CPMP's Points to Consider On The Choice Of Non-Inferiority Margin.19 The allowable difference should be set similarly to AMV/AMT: set and justified by relating specifications to SPC data, and with consideration of the likelihood of observing any of the four cases (1A, 1B, 2A, and 2B).13
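An equivalence comparison against a pre-specified margin can be sketched with the confidence-interval form of the two one-sided tests (TOST) procedure: equivalence is concluded when the 90% confidence interval for the difference of means lies entirely inside the margin. All data and the ±1.0 margin below are invented; a real margin must be justified from specifications and SPC data as described above.

```python
# Hedged sketch of an equivalence check (TOST via its confidence-interval
# shortcut). Data values and the margin are illustrative only.
from statistics import mean, stdev
from math import sqrt

approved = [98.2, 99.1, 97.8, 98.6, 98.9, 98.4]   # approved method results
new      = [97.9, 98.8, 98.1, 98.5, 98.3, 98.0]   # new method results

margin = 1.0     # pre-specified maximum allowable difference (illustrative)
t_crit = 1.812   # t(0.95, df = n1 + n2 - 2 = 10) for a 90% two-sided CI

n1, n2 = len(approved), len(new)
diff = mean(approved) - mean(new)
# Pooled standard deviation across both methods
sp = sqrt(((n1 - 1) * stdev(approved) ** 2
           + (n2 - 1) * stdev(new) ** 2) / (n1 + n2 - 2))
se = sp * sqrt(1 / n1 + 1 / n2)
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se
# Equivalence: the whole 90% CI lies inside (-margin, +margin)
equivalent = -margin < ci_low and ci_high < margin
```

Note that a larger sample size narrows the confidence interval, which is why precise methods can fail a naive significance test yet still pass a properly framed equivalence test against a justified margin.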
Dr Stephan O. Krause is manager, QC analytical services and compendial liaison at Bayer HealthCare LLC, USA.
References
1. Guideline for Industry, Text on Validation of Analytical Procedures, ICH Q2A, 60, 1995. http://www.fda.gov/cder/guidance/ichq2a.pdf
2. Guidance for Industry, Q2B Validation of Analytical Procedures: Methodology, ICH Q2B, 62, 1996. http://www.fda.gov/cder/guidance/1320fnl.pdf
3. The Fitness for Purpose of Analytical Methods, Eurachem, Teddington, UK (1998). http://www.eurachem.ul.pt/guides/valid.pdf
4. Traceability in Chemical Measurement, Eurachem/CITAC, Teddington, UK (2003). http://www.eurachem.ul.pt/guides/EC_Trace_2003.pdf
5. Guidance for Industry, Bioanalytical Method Validation (2001). http://www.fda.gov/CDER/GUIDANCE/4252fnl.htm
6. Draft Guidance for Industry, Analytical Procedures and Methods Validation (2000). http://www.fda.gov/cder/guidance/2396dft.htm
7. Technical Report 33, Evaluation, Validation and Implementation of New Microbiological Testing Methods (PDA, Bethesda, MD, USA).
8. Alternative Methods For Control of Microbiological Quality, EP. Supplement 5.5 [07/2006:50106] (December 2005). http://www.pheur.org/
9. S.O. Krause, BioPharm International 17(3), 28–36 (2004).
10. S.O. Krause, BioPharm International 17(10), 52–61 (2004).
11. S.O. Krause, BioPharm International 17(11), 46–52 (2004).
12. S.O. Krause, BioPharm International Validation Guide, a supplement to BioPharm International 18(3) (2005).
13. S.O. Krause, BioPharm International Guide to Bioanalytical Advances, a supplement to BioPharm International 18(9) (2005).
14. S.O. Krause, BioPharm International 18(10), 52–59 (2005).
15. Guidance for Industry PAT — A Framework for Innovative Pharmaceutical Development, Manufacturing, and Quality Assurance (2004). http://www.fda.gov/cder/guidance/6419fnl.pdf
16. ISPE Good Practice Guide: Technology Transfer (International Society for Pharmaceutical Engineering, Tampa, FL, 2003).
17. Statistical Principles for Clinical Trials, ICH E9 (1998). http://www.emea.eu.int/pdfs/human/ich/036396en.pdf
18. Points to Consider on Switching Between Superiority and Non-Inferiority, CPMP (2000). http://www.emea.eu.int/pdfs/human/ewp/048299en.pdf
19. Points to Consider On The Choice Of Non-Inferiority Margin, CPMP (2004). home.att.ne.jp/red/akihiro/emea/215899en_ptc.pdf
