Method Validation Guidelines

"The objective of validation of an analytical procedure is to demonstrate that it is suitable for its intended purpose" (International Conference on Harmonisation Guideline Q2A).1 "Methods validation is the process of demonstrating that analytical procedures are suitable for their intended use" (US Food and Drug Administration Draft Guidance for Industry, 2000).2

So, is your assay fit for the job?

Validated analytical test methods are required by good manufacturing practice (GMP) regulations for products that have been authorized for sale, and almost certainly for late-stage clinical trial material.3 In addition, some methods used during the pre-clinical phase of drug development under good laboratory practice (GLP) regulations may require validation.4

However, during the development of biopharmaceuticals, some methods may not need full validation — for example, those used only for process validation or comparability studies. The various terms applied to the "not-quite-validation" of such methods include "test characterization," "qualified method," and "validated for Phase I." Considerable confusion has arisen over this topic, which has been the subject of several articles and numerous conference presentations. This article seeks to clarify the situation.

In basic terms, a suitable method must be based on firm scientific principles; capable of providing the necessary sensitivity, accuracy, precision, etc.; and capable of generating reliable results. During test development, we learn more about the ability of a test to meet these requirements, and we decide whether the test is going to meet suitability standards. The key questions to be answered are:

  • What is the test objective?
  • What performance characteristics must we look at?
  • Is performance acceptable for the application?
  • Can we have confidence in the results?
  • How can we demonstrate assay validity?

The final "full" validation of a method, especially a biological assay, requires assembly of a significant amount of data on the test's performance, so that the results produced can be subjected to statistical analysis and the variability of the results documented. This can be an expensive, time-consuming procedure, and one would not want to go through the process until it was absolutely necessary.

Regulatory Requirements and Guidelines

Current regulations with a bearing on this subject are listed in Table 1. The requirement to ensure adequate validation of analytical methods originated in the US as a draft Subpart I of 21 CFR 211 in 1996. This stated "Methods Validation. The accuracy, sensitivity, specificity and reproducibility of test methods used by a manufacturer shall be validated and documented. Such validation and documentation shall be accomplished in accordance with Section 211.194(a)(2)."5 This latter section covers the laboratory records on test methods, both compendial and non-compendial. It specifically requires that "The statement [of each method used in the testing of the sample] shall indicate the location of the data that establish that the methods used in the testing of the sample meet proper standards of accuracy and reliability as applied to the product tested. . . . The suitability of all testing methods used shall be verified under actual conditions of use."6

Although Subpart I has not yet been formally incorporated into GMP regulations, which are currently under review, the Food and Drug Administration (FDA) has issued guidelines that define the agency's expectations in this field. Initially, the FDA published those sections of the International Conference on Harmonisation (ICH) Guidelines dealing with analytical procedures that had been recommended for adoption by the parties to ICH. These were Q2A Text for Validation of Analytical Procedures, October 1994,1 and Q2B Validation of Analytical Procedures: Methodology, November 1996.7 Document Q2A lays down broad guidelines on the types of tests that require validation, while Guideline Q2B gives additional details on validating various test characteristics. The European Union, Japan, Canada, and Australia are among the parties that have also adopted these ICH guidelines.

FDA issued two further documents. The first was "Guidance for Industry: Analytical Procedures Validation," in August 2000.8 This details analytical methods and procedures validation required for the chemistry, manufacturing, and controls (CMC) documentation for each investigational new drug (IND), new drug application (NDA), and biologic license application (BLA), with a description of the validation package that should be submitted with each type of application.

The FDA's "Guidance for Industry: Bioanalytical Method Validation" followed in May 2001.9 This discusses the development of methods for chemical, microbiological, and ligand-binding assays of drug products and their metabolites in "biological matrices" such as blood, serum, plasma, or urine. These would be required for pharmacological and bioavailability studies on drugs. The guidance also considers the application of validated methods to routine product analysis. These more recent FDA guidelines incorporate recommendations contained in the earlier ICH documents. They form the basis for the advice on analytical method validation in this article.

However, we should also take note of the new Q9 guideline on risk management, which states that risk assessment techniques should be used "To identify the scope and extent of verification, qualification and validation activities (e.g., analytical methods, processes, equipment and cleaning methods) using worst case approach."10

Qualification, Validation and Verification

So what does this mean in real terms? The need to characterize the product more stringently and to use validatable production and testing methods increases with the progress of the drug's development. In pre-clinical testing and Phase I and early Phase II studies, it is possible to use analytical methods that are not fully validated, but "qualified." A qualified method is one for which there is insufficient knowledge of the test's performance to document full validation, but some effort has been made to determine the method's reliability and how to control variability — a performance assessment.

Proper test qualification follows a protocol similar to full validation, but over a shorter time frame and with fewer parameters tested. If the assay is to be used in commercial production, however, it is essential that data developed during test qualification studies can be applied to the eventual validation studies. Alternatively, qualification work may indicate that the method is not suitable, which is itself a valuable result.

A set of rules must be established for control and use of test qualification. Apart from ensuring that the test method is clearly described in a standard operating procedure (SOP), it must be subject to the same change control rules as a validated test; most importantly, the SOP should specify when the qualified method can be used — for example "for the assay of Phase I clinical lots only." Because there will be only limited experience of the test's performance, it is desirable that it be performed by an experienced analyst capable of identifying potential risks in the procedure and knowledgeable regarding how to ensure that test variability is minimal. Proper calibration of apparatus and standards should be routine in a qualified test.

By the time one reaches Phase III clinical trials, authorities expect that processes and test methods are those that will be used to manufacture the product for sale. In fact, it only makes sense to perform expensive, large-scale definitive trials on a product that truly represents what will eventually be marketed. This is particularly important with biopharmaceutical products, as manufacturing processes are still considered to have a major effect on final product characteristics. Therefore, more reliance must be placed upon fully validated analytical methods, especially for the release of the final product. The general consensus is that the suitable time for a changeover from qualified methods to validated methods for a biopharmaceutical product is at the Phase IIb stage.

However, certain types of tests should be validated much earlier in the drug development program. If one does a quality risk assessment, one will find that validation would be required of preclinical GLP tests for product safety and toxicity; of tests to demonstrate the absence of contaminating microorganisms or to validate virus removal/inactivation processes; and of good clinical practice (GCP) laboratory analyses of critical factors such as blood levels of the product and its metabolites, or antibody or other surrogate marker levels in blood and tissues.

"Test verification" is a term applied to analytical methods considered already validated. These methods typically appear in compendia, such as the US Pharmacopoeia or the National Formulary. A laboratory need only demonstrate its ability to perform the test according to stated specifications. If standards are available for these tests from bodies such as the National Institute of Standards and Technology or the World Health Organization, these should be used in test verification. Similar criteria can be applied to tests developed in-house that have already been validated in the course of other development work. As a guide, a typical list of types of analytical methods normally expected to be qualified, verified, or validated is shown in Table 2.

Test Method Validation

To paraphrase definitions given in various guideline documents, "Validation involves documenting, through the use of specific laboratory investigations, that performance characteristics of the method are suitable for intended analytical applications and are reliable. The acceptability of analytical data corresponds directly to the criteria used to validate the method."

The critical parameters (or "performance characteristics") for any test method are typically defined by the guideline documents as follows:

Sensitivity: The lower limit of detection (LOD). The lowest concentration of an analyte that the analytical procedure can reliably differentiate from background "noise."

Specificity: The ability to assess unequivocally the analyte in the presence of components that may be expected to be present. Typically these might include impurities, degradation products, and matrix.

Precision: The closeness of agreement (degree of scatter) among a series of measurements obtained from multiple sampling of the same homogeneous sample under prescribed conditions. Three levels of precision may be considered:

  • Repeatability — precision under the same operating conditions, over a short time period
  • Intermediate precision — within laboratory variations; different days, analysts, equipment, etc.
  • Reproducibility — precision among different laboratories, e.g., by collaborative studies

Accuracy: The degree of closeness of the determined value to the nominal or known true value under prescribed conditions.

Quantification Range: The range of concentration, including the upper limit of quantification (ULOQ) and the lower limit of quantification (LLOQ), that can be reliably and reproducibly quantified with accuracy and precision through use of a concentration-response relationship. Note that LLOQ is not necessarily the same as LOD. It should be understood that the lowest amount of the analyte that may be detected may not be assayed accurately by the test.

Linearity of the Standard Curve: The extent to which the relationship between the experimental response value (or a mathematical manipulation of this) and the concentration of the analyte approximates to a straight line. This relationship may also be called the calibration curve. The linearity of the calibration curve will affect the quantitation range, since departure from linearity in some portion of the graph will affect the reliability of the assay.

Robustness: The ability of the method to deliver accurate, precise results under normal variations of operating conditions. Such variations may include performance by different analysts, the use of reagents from different suppliers, or of reagents with differing storage times. Also, different types of analytical equipment, perhaps with different sensitivity or accuracy, may be in use in the test development laboratory and in those laboratories that will perform routine quality control (QC) testing. The more robust the assay, the less the effect of these factors on its accuracy and precision.
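To make the distinction between precision levels concrete, the following pure-Python sketch computes repeatability within a single run and a simple intermediate precision by pooling two days' runs, each expressed as a coefficient of variation. The data are hypothetical:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation: 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate potency results (percent of label claim)
day1 = [98.2, 101.5, 99.8, 100.4, 97.9, 100.9]
day2 = [101.8, 99.1, 102.6, 100.2, 103.0, 101.1]

repeatability = cv_percent(day1)         # same analyst, same day, short period
intermediate  = cv_percent(day1 + day2)  # pooled across days (within-lab variation)

print(f"repeatability CV = {repeatability:.1f}%")
print(f"intermediate precision CV = {intermediate:.1f}%")
```

In practice, intermediate precision would also vary analysts and equipment, not just days; pooling two days' data here is a deliberate simplification.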

The applicability of the various test parameters defined above to tests for identity, purity, and potency (assays) is shown in Table 3. Note this is an indication, and not an exhaustive list, of the types of analysis that might be applicable to testing and release of a product. ICH Guideline Q2A discusses the validation of the various types of tests listed in Table 3 as "lot release tests."

Physical and Chemical Analyses

The nature of physical and chemical analyses is such that they may be relied upon to be more accurate and reproducible, and therefore easier to validate, than biological analytical methods. Many tests required for evaluation of drug substances and drug products are defined in various pharmacopoeias and national formularies. In addition, ICH has published Q6A and Q6B, which give test procedures and acceptance criteria for new drug substances and products and for biotechnological/biological products, respectively.11,12 These tests are defined by the FDA as "regulatory analytical procedures" and are generally preferred by the agency. Alternative test methods may be used, but they must be demonstrated to be equivalent to or better than the compendial test in every respect. Thus, non-compendial tests present greater validation challenges, because the adoption of an analytical method as a regulatory analytical procedure indicates that it can be validated by any competent laboratory.

FDA's "Analytical Procedures Validation" document provides guidance on levels of test validation required for IND and NDA submissions and differentiates between compendial and non-compendial tests in the type and amount of documentation required to support any particular test method. It also gives detailed instructions on the approach to be taken in validating particular analytical methods, such as gas chromatography, high- or low-pressure liquid chromatography, or capillary electrophoresis.8

Validation of physical and chemical analytical methods involves a well-characterized procedure, according to an established SOP, using properly qualified and calibrated instruments. The test should be run a sufficient number of times, using a clearly defined and acceptable reference standard and, if necessary, using different analysts, so that a proper statistical analysis can be performed to determine such things as accuracy and precision, as defined for each type of test. For quantitative measurements, statistics should be able to determine the linearity of the observed response, or to indicate the optimum manipulation to achieve linearity, e.g., log/log plotting. Equally important is the determination of the parallelism between the standard curve and those created by test samples. The statistical methods for this test have been described by Kleinbaum, Kupper, and Muller.13
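As an illustration of the linearity and parallelism assessments just described, the sketch below fits ordinary least-squares lines to hypothetical log(dose)-response data for a reference standard and a test sample. The 10% slope-ratio screen is purely illustrative, not a regulatory figure; the formal statistical tests are those described by Kleinbaum, Kupper, and Muller.13

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = slope*x + intercept; also returns r^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical log(dose) vs. response for a reference standard and a test sample
log_dose = [0.0, 1.0, 2.0, 3.0]
std_resp  = [0.49, 1.01, 1.52, 2.00]
samp_resp = [0.42, 0.93, 1.41, 1.90]

s_std, _, r2_std = fit_line(log_dose, std_resp)
s_smp, _, _      = fit_line(log_dose, samp_resp)

print(f"standard curve r^2 = {r2_std:.4f}")
# Crude parallelism screen: flag the sample if slopes differ by more than 10%
print(f"slope ratio (sample/standard) = {s_smp / s_std:.3f}")
```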

Biological Assays

The form of biological test that most often requires validation is that which provides a quantitative estimate of the biological activity or "potency" of a biological or biopharmaceutical product — an assay. The FDA's "Analytical Procedures Validation" does not provide help in this area, as it merely states, "Biological and/or immunochemical assays should be assessed using the same analytical procedures used to determine product potency. These can include animal-based, cell culture-based, biochemical, or ligand/receptor-binding assays. While these tests may be needed for complete characterization of certain reference standards, specific recommendations for validation of biological and immunochemical tests are not contained in this guidance document."8

FDA's later document, "Bioanalytical Method Validation," goes some way toward resolving this problem, although it is specifically designed to assist in the validation of tests used in clinical pharmacology, bioavailability, and bioequivalence studies. The document describes how to validate analyses for quantitative measurement of an analyte in a biological matrix such as blood, plasma, serum, or urine. The critical parameters for this type of validation are, however, similar to those already discussed, namely accuracy, precision, selectivity, sensitivity, reproducibility, and stability. Therefore, biological assays can be approached from the same standpoint as physical or chemical assays, as long as the inherent variability of biological systems is taken into account.9

The "Bioanalytical Method Validation" document lays down some criteria that may be applied to the validation of the most critical assay parameters. It should be noted, however, that in some tests it may not be possible to reach the levels of accuracy and precision specified in this document, while in others it may be possible to set closer limits. The criteria should be test-specific.9

Accuracy is determined by replicate analysis of samples containing known amounts of the analyte. It is recommended that at least five determinations per concentration be done. A minimum of three different concentrations within the range of expected concentration should be tested so that data comprise at least 15 analyses. The mean value should be within 15% of the actual value, except at the lower limit of quantitation, where it should not deviate by more than 20%.

Precision should be measured using a minimum of five determinations on at least three different concentrations, testing multiple aliquots of a single homogeneous sample. The precision determined at each concentration should not exceed a coefficient of variation (CV) of 15%, except at the LLOQ, where it should not exceed 20%. The same criteria should be applied to each type of measurement of precision (intra-assay, inter-assay).
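A minimal sketch of how these accuracy and precision criteria might be checked in code, using hypothetical replicate data. The 15%/20% limits are those quoted above, but as noted, they may need to be adjusted for a specific test:

```python
import statistics

def within_limits(replicates, nominal, is_lloq=False):
    """Check the criteria sketched above: mean within 15% of nominal
    (20% at the LLOQ) and CV not exceeding 15% (20% at the LLOQ)."""
    limit = 20.0 if is_lloq else 15.0
    mean = statistics.mean(replicates)
    bias = abs(mean - nominal) / nominal * 100.0
    cv = statistics.stdev(replicates) / mean * 100.0
    return bias <= limit and cv <= limit

# Hypothetical: five determinations at each of three concentrations (ng/mL)
runs = {
    2.0:   ([2.3, 1.8, 2.1, 1.7, 2.2], True),          # LLOQ level
    50.0:  ([51.2, 48.9, 50.5, 49.7, 52.0], False),
    200.0: ([195.0, 204.1, 199.3, 202.6, 197.8], False),
}
for nominal, (reps, lloq) in runs.items():
    print(nominal, "ng/mL passes:", within_limits(reps, nominal, lloq))
```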

The following criteria should be applied to the calibration or standard curve:
  • Limit of detection should be at least three times the background or blank response
  • Analyte response at the LLOQ should be at least five times the blank response
  • Analyte (peak) response should be identifiable, discrete, and reproducible with a precision of 20% and accuracy of 80%-120%

The simplest mathematical model that yields a useable relationship between concentration and response should be used.
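The two signal-to-blank criteria listed above can be screened with a few lines of code; the responses here are hypothetical absorbance readings:

```python
def signal_checks(blank_response, lod_response, lloq_response):
    """Screen detection limits against the curve criteria listed above:
    LOD signal at least 3x blank, LLOQ signal at least 5x blank."""
    return (lod_response >= 3 * blank_response,
            lloq_response >= 5 * blank_response)

# Hypothetical background and low-end responses (absorbance units)
lod_ok, lloq_ok = signal_checks(blank_response=0.010,
                                lod_response=0.034,
                                lloq_response=0.056)
print("LOD >= 3x blank:", lod_ok, "| LLOQ >= 5x blank:", lloq_ok)
```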

In developing the standard curve, there must be less than 20% deviation of the LLOQ from its nominal concentration, and less than 15% deviation from nominal for standards at all other concentrations.

Acceptance criteria should be set for bioanalytical test validation before the series is run. The guidance offers the following acceptance criteria: at least 67% of QC samples should be within 15% of their respective nominal values, and up to 33% of the QC samples (but not all replicates at the same concentration) may be outside the 15% limit. Some tests may require wider limits.
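A sketch of the 67% acceptance rule applied to a run of hypothetical QC results. Note that this simplification checks only the overall fraction; the full criterion also requires that the failing samples not all be replicates at the same concentration:

```python
def qc_run_accepted(qc_results, limit_pct=15.0, min_fraction=2 / 3):
    """Accept the run if at least 67% of QC samples fall within limit_pct
    of nominal, per the criterion quoted above (fraction check only).
    qc_results: list of (measured, nominal) pairs."""
    within = [abs(m - n) / n * 100.0 <= limit_pct for m, n in qc_results]
    return sum(within) / len(within) >= min_fraction

# Hypothetical run: six QC samples, two concentrations in triplicate (ng/mL)
run = [(5.4, 5.0), (4.7, 5.0), (6.1, 5.0),
       (48.0, 50.0), (51.5, 50.0), (44.0, 50.0)]
print("run accepted:", qc_run_accepted(run))
```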

Some small points to consider in performing the replicate runs required to validate a bioassay, using the ubiquitous microplate method, are as follows:

  • Ensure that a certain number of microplates receive samples in reverse order ("front-to-back") or some other arrangement, to limit potential effect of time or position on the plate. Positional effects may be very important. It has been reported that multi-channel pipetting devices, which are often used for plate assays, may show variability between channels in the volume delivered.
  • Perform identical validation runs using a sufficient number of lots of cells, media, standards, etc. to ensure that the test is sufficiently robust to withstand lot-to-lot variation.
  • Ensure that the SOP describing the assay is sufficiently detailed and that it gives unequivocal instructions for procedures that may affect accuracy or precision of the test, e.g., mixing, washing steps.
  • Be certain all reagents can withstand the exposure that occurs during plating out and further manipulation of the microplates during the assay.
  • Check that the plate reader has been adequately qualified for this assay.
  • Record all data directly after reading, and have mathematical manipulations checked by another analyst, or, if they are normally performed by the instrument's software, check this by manual re-calculation.
  • If validation requires evaluation of a number of different test parameters, a "multi-dimensional" test plan can be used.

Conclusion

The validation of analytical test methods, if they are not already described in an approved compendium, can present more problems in biopharmaceutical manufacturing than in chemical drug production. The high dependence of product quality on specific manufacturing processes means that even in-process tests must be accurate and reliable. However, "full" validation may not be required until fairly late in the development process; in fact, it may not be achievable until sufficient experience with the assay has been gained and adequate data have been collected.

The use of qualified methods is acceptable at earlier development stages, but a similar level of discipline must be applied to qualification of assays as to validation. The basic difference between the two procedures lies in the extent to which critical test parameters are examined and controlled. Most people concerned with analytical procedures prefer to spend a fair amount of time at the qualification stage, so as to determine whether the method will actually be validatable. Then, final validation will depend upon analysis of sufficient test data. The secret to success in this area is "know your assay" — validation depends upon understanding the test process and its potential for variability.

Alex D. Kanarek, Ph.D., is president of Bio-Development Consulting, 38 Colonel Sharpe Crescent, Uxbridge, Ontario L9P 1T7, Canada. Phone: 905.852.5776; Fax: 905.852.5881.


References

1. ICH Q2A: Validation of Analytical Procedures. Guideline for Industry. US Food and Drug Administration web site.

2. Guidance for Industry: Analytical Procedures and Methods Validation: Chemistry, Manufacturing, and Controls Documentation; Draft Guidance. Rockville, MD: US FDA; 2000.

3. Good Manufacturing Practices (GMP): Quality System (QS) Regulation. US Food and Drug Administration web site.

4. Bioresearch Monitoring, Good Laboratory Practices: GLP References and Guidance. US Food and Drug Administration web site.

5. Title 21 — Food and Drugs; Subchapter C — Drugs: General. Part 211: Current Good Manufacturing Practice for Finished Pharmaceuticals. Subpart I: Laboratory Controls. US Food and Drug Administration web site.

6. "Laboratory Records." Code of Federal Regulations: 21 CFR 211.194(a)(2). National Archives and Records Administration web site.

7. ICH Q2B: Validation of Analytical Procedures: Methodology. Guideline for Industry. US Food and Drug Administration web site.

8. Guidance for Industry: Analytical Procedures and Methods Validation: Chemistry, Manufacturing, and Controls Documentation. Draft Guidance. US Food and Drug Administration web site.
