Wednesday, August 25, 2010

Is FDA's Draft Process-Validation Guidance a Mixed Blessing?


The US Food and Drug Administration’s Draft Guidance for Industry—Process Validation: General Principles and Practices provides a life-cycle approach for validating pharmaceutical processes and aims to help pharmaceutical companies achieve consistently high product quality. The document includes several concepts that are familiar to the industry but also contains ambiguities and recommendations that might be difficult for some drugmakers to follow.
The draft guidance suggests manufacturers establish links from their clinical process to their commercial-manufacturing process. This approach is similar to the one FDA has used in its preapproval inspections. If the guidance becomes final as it currently stands, manufacturers may be expected to use the data that they gain during formulation and development to define a product’s critical attributes, which would be the basis for the manufacturing-process parameters.
The agency points out that development and formulation data can improve a company’s understanding of its processes during scale-up and commercial manufacturing. This understanding would help companies control variability and increase product quality, says Chris Ames, director of global validation at Catalent Pharma Solutions (Somerset, NJ). Companies would submit these data to FDA to establish links between clinical and commercial processes.
But the draft guidance does not advise manufacturers about how to identify the most important characteristics of their products or manufacturing processes, or about how to demonstrate links from the clinical to the commercial process. “They’ve left it completely open to interpretation as to what data you provide and what format you use,” says Jim Agalloco, president of Agalloco and Associates. This ambiguity would suit Big Pharma because it frees companies to use their experience and discretion in deciding how to follow the guidance, says Agalloco. Small and emerging drugmakers, however, would likely be confused because they don’t have the depth of knowledge that would help them define critical attributes.
Some elements of the draft guidance resemble a Six Sigma approach to manufacturing, which is familiar to the pharmaceutical industry. The main similarity is the draft guidance’s recommendation of a statistical link that demonstrates that variability remains constant from the clinical through the commercial manufacturing stages. The statistical link is intended to confirm that processes are the same throughout all phases.
Although the draft guidance suggests statistical analysis, it leaves industry with only a broad understanding of what that means. FDA does not explicitly suggest that manufacturers use particular statistical tools; the agency simply recommends that companies apply good statistics to establish the links, says Agalloco.
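One simple way to check that variability has remained constant from clinical to commercial scale is to compare the sample variances of a critical quality attribute across the two batch sets. The sketch below is illustrative only, not a method named in the guidance: the batch data, attribute, and acceptance logic are all hypothetical, and in practice the resulting F-ratio would be judged against a critical value for the chosen significance level and batch counts.

```python
import statistics

def variance_ratio(clinical, commercial):
    """F-ratio of sample variances between two batch sets.

    A ratio near 1 suggests the variability of the attribute has
    stayed roughly constant between clinical and commercial scale.
    """
    v1 = statistics.variance(clinical)
    v2 = statistics.variance(commercial)
    return max(v1, v2) / min(v1, v2)

# Hypothetical assay results (% of label claim) for one critical
# quality attribute, measured on clinical and commercial batches.
clinical_batches = [99.1, 100.4, 98.7, 101.0, 99.6]
commercial_batches = [99.8, 100.1, 99.3, 100.6, 99.9]

ratio = variance_ratio(clinical_batches, commercial_batches)
# The ratio would then be compared against an F critical value
# for these sample sizes and a chosen alpha (from F tables).
print(f"variance ratio: {ratio:.2f}")
```

Whatever statistical tool a company chooses, the point of the comparison is the same: to document that scale-up has not introduced new sources of variability.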
The draft guidance suggests manufacturers define a process that can be measured, analyzed, improved, and controlled, and this approach is closely related to Six Sigma. The benefit of the Six Sigma technique is that it provides a mechanism for scientific review of a process, for assessing variability, and for identifying improvements, says Ames.
On the other hand, it is unclear whether the draft guidance recommends a product be refined in the way that a Six Sigma approach would. “To me, Six Sigma implies an acceptance by FDA that you might not have done a sufficient job in development and scale-up and are allowed to improve product and process while it is in operation,” says Agalloco. Patients’ experiences with a product might persuade a manufacturer that it should adjust one of the drug’s parameters to improve it. Six Sigma would allow postcommercialization changes to a product, but the draft guidance may not be compatible with them, Agalloco says.
Before it could submit a regulatory filing, a company would have to spend a great deal of time and money to better understand its ingredients, its product, its manufacturing process, its material handling, and associated variables. Pharmaceutical companies might object to the draft guidance’s approach because it suggests this expensive work be completed before commercialization, even though the costs could not be recouped until commercial-scale manufacturing began.
Although it is based on good science, if the final guidance is approved as drafted, it could easily increase drug-development time by one or two years, thus costing a manufacturer millions of dollars, says Warren Charlton, a consultant at WHC Bio Pharma Technical Services. Manufacturers would need smart strategies to shorten development time, but not knowing how much data regulators expect in a submission would make such strategies difficult to devise.
The draft guidance inspired a huge volume of comments that will likely take FDA a long time to review, says Agalloco. Even though the guidance might not be final for at least a year, manufacturers would be wise to study it now and seek advice about interpreting it. In this difficult time for the pharmaceutical industry, no company can afford to ignore regulators’ recommendations, and advance preparation would be to a manufacturer’s benefit.

Contractee Responsibilities in Outsourced Pharmaceutical Quality Control Testing

Method Qualification (Verification and/or Validation)
Once a contract testing laboratory has been given “approved” status, the actual methods to be used may be qualified for their purpose. If the contractor has a desired compendial method in place, the contractee is responsible for providing a sample of test material for verification according to compendial requirements. The US Pharmacopeia includes guidance on verification of compendial procedures (16). For noncompendial lot-release methods (e.g., viral and mycoplasma testing, which are driven by US FDA points to consider and ICH guidance documents), the methods will require validation if used for GMP purposes.
Validation is normally performed by a contract testing organization with a generic sample matrix, so it should not be confused with the product-specific qualification described below. Validation of compendial procedures is outlined in the US Pharmacopeia (17) as well as the International Conference on Harmonisation of Technical Requirements for the Registration of Pharmaceuticals for Human Use (ICH) (18). A contractee may wish to confirm the status of a given method by evaluating validation reports during an on-site visit. Contract testing labs may also be willing to provide copies of method validation reports or summary documents describing those results.
Commercial product lot release testing is expected to be performed using methods deemed suitable for a given product. This typically entails product qualification for the analytical method and evaluation of possible matrix interference. The contractee is responsible for commissioning such studies before routine use of any method and should maintain the resulting reports as evidence that the methods being used are suitable for their purpose. Such studies may need to be repeated if the processes involved in manufacturing a commercial product are modified significantly.
Quality and Business Agreements
Regulators expect that the relationship between a product sponsor and its contract testing partner be formalized by a quality agreement (7,8). Such agreements specify explicitly the responsibilities of each partner and provide the means by which a contractee extends its QC testing standards to a contractor. The agreement should list technical and/or quality contacts (names and phone numbers) at the contract testing organization as well as contact information for appropriate decision makers at the contractee organization.
Among the specifics to be detailed in a quality agreement are
  • the types of compliance to be followed in contracted studies
  • details on interactions between the contractor's and the contractee's quality systems
  • requirements for equipment and assay validation, verification, and/or qualification
  • assurance of quality and reporting requirements
  • data recording and archiving practices
  • conduct of investigations into nonconformances, deviations, and unexpected or out-of-specification (OOS) results (and timing of client notification)
  • notification of regulatory inspection
  • requirements for use of subcontracting laboratories
  • use of debarred personnel
  • availability of the contract testing laboratory for periodic technical and compliance audits as well as for-cause audits.
Business requirements should be addressed in a separate document and may take the form of a master service agreement, an agreement of scope, a pricing agreement, or a standard terms and conditions agreement supplied by the contractor. Such considerations may include assay pricing (including discount structures), assay initiation and report turn-around times, testing volume and exclusivity, availability of rush service and associated charges, and (in some cases) penalties for late reporting. These business agreements often specify the requirements of a contract testing laboratory with respect to contractee notification of impending sample submission, which are detailed below.

Thursday, August 19, 2010

A Path to Quality and Compliance


Compliance with quality regulations that protect patients' safety is a critical requirement for the pharmaceutical industry. Regulatory compliance goes a long way toward proving that a given product's target for quality has been achieved and documented. Pharmaceutical Manufacturing Handbook: Regulations and Quality is intended to help readers understand how to comply with regulations and how to adapt a quality unit's routine activities to facilitate compliance.
In the book's preface, editor Shayne Cox Gad says the book describes "all regulatory aspects and requirements that govern how drugs are produced for evaluation (and, later, sale to and use) in humans." The book's back cover says it contains "everything you need to ensure full compliance and superior quality control."
The book's eight sections cover topics such as good manufacturing practices (GMPs) and other US Food and Drug Administration guidelines, international GMP regulations, quality, process analytical technology (PAT), personnel, contamination and contamination control, drug stability, and validation.
The book benefits from the work of more than 40 authors—respected academics and industry veterans—who contributed their years of expertise in various topics.

Pharmaceutical Manufacturing Handbook: Regulations and Quality, Shayne Cox Gad, Ed., Wiley, London, 2008, 856 pp., ISBN: 978-0-470-25959-7
The book's first two sections are devoted to US and international regulation of GMPs. The first section describes FDA's GMP regulation and provides a good reference to other pertinent regulations and guidelines such as scale-up, postapproval changes, and PAT publications. These sections provide useful advice for complying with regulations. A chapter titled "Enforcement of Current Good Manufacturing Practices" includes a surprising and illuminating discussion about FDA's collaboration with the Federal Bureau of Investigation during certain inspections.
The next section discusses various aspects of quality. Despite offering detailed discussions, chapters about total quality management, the role of quality systems and audits, creating and overseeing quality-management systems, and quality-process improvements are quite easy to read. These chapters also provide materials that can be used in a quality operation (e.g., a checklist for performing a quality audit).
The section dedicated to PAT offers a discussion about chemical imaging and chemometrics. Passages provide details about the background of PAT, its benefits, and the various methods of implementing PAT.
The comprehensive section about drug stability provides readers a scientific understanding of how to determine a product's shelf life. This section discusses alternative accelerated testing methods through variable-parameter kinetics studies.
The section about validation includes considerations of essential matters such as analytical methods, laboratory instruments, and pharmaceutical manufacturing. This portion omits other important types of validation, however, such as computer validation, process validation, and facility validation.
Nevertheless, this book is a valuable reference for anyone interested in drug stability, quality, and PAT. The contributors' expertise ensures that the book shines in its treatment of those topics. It is not far off the mark to say that the book contains "everything you need to ensure full compliance and superior quality control."

Coming to Terms with Compliance1


Drug manufacturers are under increasing pressure to bring products to the market faster and more cost-effectively while simultaneously meeting stringent quality requirements. Changing regulatory environments make the task of monitoring and adhering to quality standards challenging, but the costs of non-compliance are high. Failing to comply with the demands of regulations can result in heavy fines, product recalls and, in some cases, plant closure.

VMP programme checklist
Before a drug can be marketed, it must gain approval from regulatory authorities such as the US Food and Drug Administration (FDA) and/or the European Agency for the Evaluation of Medicinal Products (EMEA). A company applying for marketing approval must demonstrate that the drug has been produced according to strictly controlled and validated procedures, thereby ensuring the safety and consistent quality of the product. According to FDA, process validation is defined as: "Establishing documented evidence, which provides a high degree of assurance that a specific process will consistently produce a product meeting the predetermined specifications and quality attributes." Regulatory compliance is necessary at all levels of the drug discovery and development chain, encompassing areas such as good laboratory practice (GLP), good manufacturing practice (GMP), and good clinical practice (GCP). GLP focuses on the in vitro and in vivo evaluation of toxicological safety, with an emphasis on anticipating safety issues for clinical evaluations. GCP requires the evaluation of both product efficacy and safety in a clinical context, whereas GMP focuses on the quality evaluation of the manufactured product.
The areas requiring regulatory compliance cover an extremely broad spectrum. Existing regulations are periodically updated and revised, and are stringently enforced. Drug manufacturers, therefore, must keep abreast of regulatory developments; even unintentional non-compliance can potentially cost millions in fines and disruption to operations.

The costs of non-compliance
A regulatory authority can impose routine inspections, mandatory alterations in procedures, forced closure and even criminal prosecution on companies failing to comply with regulations. During the past 5 years, there has been an increase in the number of consent decrees (legal agreements to settle disputes with FDA) in the US, which can incur costs of tens of millions of dollars.
A dispute in 1999 involving a high profile company resulted in a $100 million fine for failing to correct defects in its manufacturing processes despite 6 years of warnings (Washington Post 03/11/99).

Vendor qualification checklist
Warning letters from FDA to companies violating regulations, which are publicly displayed on FDA's website, carry another cost: severe damage to a company's reputation. The language is unequivocal and plainly states how a company has failed to meet regulatory requirements (www.fda.gov/foi/warning.htm). The ultimate cost to those that fail to meet regulatory requirements, however, is that potential revenue from a product will be lost, jeopardizing returns on investment.
Maintaining compliance
Regulatory compliance is not a one-off procedure; it should be an integral part of an organization. This may involve devising an ongoing management process that includes:
  • company-specific interpretation of current regulations
  • creation of a systems inventory
  • identification of systems that do/do not comply
  • a detailed assessment of any gaps in compliance
  • development of an active implementation plan, describing the necessary corrective actions required to bring systems into compliance
  • prioritization of systems that need to be upgraded
  • ensuring compliance of systems according to a prioritized list
  • ensuring documentation is in place, archived and properly maintained.

Validation master plan. To avoid unnecessary work and to give a good overview of the entire project, the plan should not repeat information that is available elsewhere, for example, in SOPs. Rather, it should refer to established documentation. Template documents should be reviewed and updated regularly to ensure that the latest regulations are incorporated alongside newly introduced company policies.

Selecting a freeze dryer 2

Validation and compliance
The most recent demands on freeze dryers are validation and the ability to be 21 CFR Part 11 compliant. These factors account for a significant part of the cost of pharmaceutical processing freeze dryers.
For validation, a full component catalogue must be supplied. An installation qualification/operational qualification (IQ/OQ) document is generated that outlines the proper validation process, and a factory acceptance test (FAT) and site acceptance test (SAT) are performed to verify that the system is supplied as ordered and performs within the required specifications.
To be 21 CFR Part 11 compliant, the freeze-dryer software must encrypt all data to prevent tampering, and must log every change and entry on the computer control system using user log-ins and password protection.

Misconceptions
The most common misconception is that "all freeze dryers are the same". The choice of components, materials, construction and instrumentation creates wide variation in cost versus performance.
Older systems tend to have undersized compressors and condensers, as well as restrictions between the product chamber and the condenser, which limit the rate of freeze drying and often cause the process to be extended. Today's freeze dryers are much better designed to accommodate the maximum load that may be placed in the system, so the freeze dryer is no longer the limiting factor in the process. For example, compressor reliability has improved significantly during the last 15 years; in small freeze dryers, the use of scroll compressors has virtually eliminated the failures common with reciprocating compressors.
As there are so many possible variations in size and features, advanced freeze dryers are 'built to order' where the end user works with the manufacturer to obtain a system suitable for their application requirements.
Innovations in freeze dryer technology
Thermal analysis and freeze drying microscopes have helped improve the understanding of the critical temperature — the temperature at which the product may collapse or melt-back — of the product being freeze dried. This knowledge provides the information required to produce a robust and efficient freeze drying cycle.
Classic freeze drying control is open loop: the shelf temperature and chamber pressure are controlled according to a predetermined profile. It is assumed that the temperature of the product stays below its critical temperature, and the result is a reproducible, but very conservative and long, freeze-drying cycle.
Closed-loop control of the shelf temperature is required to both prevent collapse and minimise the length of the freeze-drying cycle. The latest control systems use critical-temperature information to dynamically control shelf temperature, which protects against collapse and melt-back while optimising the freeze-drying cycle.
Methods that use an average measurement of the product in the chamber, such as those calculated via pressure rise testing, adjust the temperature of the shelves a few times during the first half of the primary drying process. Because such a protocol is limited to the first half of primary drying, it remains conservative: it is not optimised and does not take into account variations inside the chamber.
The latest control systems take into account both average and specific measurements to ensure there is no melt-back, and they control the shelf temperature continuously throughout the entire cycle to produce a user-selectable conservative or aggressive protocol.
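The core of such a closed-loop scheme can be sketched as a simple proportional controller that raises the shelf temperature when the product is safely below its critical temperature and backs off as the product approaches it. This is a minimal illustration only, not a vendor algorithm: the function name, gain, safety margin, and step limit are all assumptions chosen for clarity.

```python
def control_shelf(product_temp, shelf_temp, critical_temp,
                  margin=2.0, gain=0.5, max_step=1.0):
    """One step of a proportional closed-loop shelf controller.

    Raises the shelf temperature while the product is safely below
    (critical_temp - margin), and lowers it when the product gets
    too close to collapse/melt-back. All values are in degrees C;
    the gain, margin, and per-step limit are illustrative.
    """
    headroom = (critical_temp - margin) - product_temp
    step = gain * headroom
    # Limit how fast the shelf temperature may change per step.
    step = max(-max_step, min(max_step, step))
    return shelf_temp + step
```

For example, with a critical temperature of -30 °C, a product at -35 °C has headroom and the shelf is warmed, whereas a product at -31 °C is inside the safety margin and the shelf is cooled. Real systems would add integral/derivative terms and use the measured or inferred product temperature at the sublimation front.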
In the future, more advanced closed loop control systems will be available that offer improved process control. Today, most applicable measuring instruments, such as tunable diode laser absorption spectroscopy, near infrared and mass spectrometry, are expensive and provide only marginal process improvement, which means they are not economically feasible for process control. As instrumentation and techniques advance, they will be incorporated into real-time process control systems.

Selecting a freeze dryer 1

Pharmaceutical Technology Europe
The most important consideration when choosing a freeze dryer is to ensure the system is fit for both today's applications and future needs. For the sake of this discussion, we will focus on freeze dryers with fluid-filled shelves, which excludes low-end manifold and heat-only shelf freeze dryers.
An understanding of the available features of a freeze dryer can facilitate the choice. The following are some considerations:
  • shelf size (m2)
  • shelf style (bulk or stoppering)
  • condensing rate (L/h)
  • condensing capacity (L)
  • condenser location (internal versus external)
  • material (304 versus 316 stainless steel)
  • 21 CFR Part 11 compliance.

The freeze dryer manufacturer will also want to know what space is available for the freeze dryer and what utilities, such as electrical, air, chilled water and air conditioning, are available.
The following are a few examples of some of the various options:
  • Cylindrical or rectangular product chambers — a cylindrical product chamber is less expensive than a rectangular chamber; however, it may occupy more floor space depending on the configuration of the shelf assembly.
  • Internal or external condenser — an internal condenser is cheaper and provides unrestricted vapour flow. An external condenser is supplied with an isolation valve to separate the product from the condenser, which protects the product from reconstitution during power loss, and keeps the condensate out of the clean room environment.
  • Pirani or capacitance manometer — Pirani gauges, the least expensive vacuum measurement devices, read the relative vacuum inside a freeze dryer because they are affected by vapour: the more vapour present, the higher the pressure reading. A capacitance manometer reads the absolute vacuum level, and its reading is unaffected by vapour pressure. Most production systems use a capacitance manometer for measurement and control of vacuum level. The best method for determining the 'end of primary drying' is to compare a Pirani reading with a capacitance manometer reading: when they read the same, no vapour is present and the product is dry. A quick test can be conducted by lowering the vacuum level to see whether the Pirani reading tracks the capacitance manometer. If water is present, the capacitance manometer will drop faster; if no water is present, they will drop at the same rate.
  • Proportional vacuum control — the least expensive vacuum control system bleeds gas into the chamber using a solenoid valve, providing ±10 mT stability at 100 mT. For better stability, a proportional vacuum controller can be used that regulates the gas bleed through a proportional valve. The result is ±0.5 mT or better control.
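The Pirani-versus-capacitance-manometer comparison described above lends itself to a simple automated check: flag the end of primary drying once the two readings converge. The sketch below is a minimal illustration under assumed names and an assumed tolerance; real systems would also require the condition to hold for some minimum time before declaring the product dry.

```python
def primary_drying_complete(pirani_mtorr, cap_man_mtorr, tol=0.05):
    """End-of-primary-drying check.

    A Pirani gauge over-reads when water vapour is present, so once
    its reading falls to within `tol` (relative) of the capacitance
    manometer's absolute reading, essentially no vapour remains.
    The 5% tolerance is an illustrative choice, not a standard.
    """
    return abs(pirani_mtorr - cap_man_mtorr) <= tol * cap_man_mtorr

# Hypothetical readings (mTorr) at two points in a run:
print(primary_drying_complete(160.0, 100.0))  # vapour still present
print(primary_drying_complete(101.0, 100.0))  # readings converged
```

In the first case the Pirani still reads well above the capacitance manometer, so sublimation is ongoing; in the second the gap has closed, which is the convergence condition described in the bullet above.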
Main freeze drying categories
Freeze dryer selection falls into two main categories: laboratory versus production, and non-sterile versus sterile.
Laboratory freeze dryers are used for a large variety of applications, including removal of solvent from a material, Phase I clinical trials and protocol development for scaleup production. A typical laboratory system will have a shelf area of 0.1–1 m2 and a condensing capacity of up to 30 L.
Laboratory-style systems can be simple freeze dryers with only standard features, such as a Pirani gauge for vacuum level measurement and thermocouples for temperature monitoring, or they can incorporate more advanced instrumentation:
  • capacitance manometer for vacuum measurement
  • proportional vacuum control for fine vacuum control
  • isolation valve between the product chamber and condenser for pressure rise testing
  • liquid nitrogen traps for organic solvent trapping
  • additional product thermocouples for monitoring product temperature.

Pilot and production systems offer shelf areas from 1 m2 up to more than 40 m2. Production systems are used for Phase II and III clinical trials, and tend to be used for the same or a limited number of products in high-volume production. Recently, there has been a shift from 10–50-mL vials to 2-mL and 5-mL vials for smaller volume, high-potency biotech and protein-related products. The result is smaller freeze dryers with expensive payloads.
The type of processing will determine whether stoppering is required. Bulk applications can use fixed-in-place shelves, but vial applications require stoppering, where the shelves move and are squeezed together to press the partially inserted stoppers into the vials.
Pharmaceutical and other applications may also need to be sterilised between cycles, which can add significant complications and costs to a freeze dryer. A freeze dryer is normally rated only for vacuum, and the most common method of sterilisation is pressurised steam, which requires the freeze-dryer chambers to be certified pressure vessels rated to 2 atm at 131 °C.
An alternative sterilisation technique, which is growing in popularity for laboratory and small production systems, uses hydrogen peroxide (H2O2). H2O2 does not require a pressure-rated vessel, which helps to minimise costs.

Using a Delphi Survey to Assess the Value of Pharmaceutical Process Validation Part 1: Survey Methodology 3

Methodological results
Response rate and expert demography. Of the 73 e-mail addresses used, 36 experts responded to Q1, 28 of whom continued to Q2. Thus, the response rates were 49% and 38%, respectively, for Q1 and Q2.
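The reported rates follow directly from the counts, rounding to the nearest whole percent:

```python
invited = 73        # e-mail addresses used
q1_responses = 36   # experts who answered Q1
q2_responses = 28   # of those, continued to Q2

q1_rate = round(100 * q1_responses / invited)
q2_rate = round(100 * q2_responses / invited)
print(f"Q1: {q1_rate}%, Q2: {q2_rate}%")  # Q1: 49%, Q2: 38%
```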

Table I: Visits (hits) on the information pages during the survey.
Some of the experts' demographics can be seen in Figures 1–5, which show comprehensive variation in their backgrounds. There were participants from Finland, Denmark, France, Germany, the UK, Norway, Sweden, Belgium, Iceland and Switzerland; however, the number of participants from each country was not equal.

Activity reports
Throughout the survey, the level of activity from the participants varied. During Q1, 19% wrote extra comments in their answers, but in Q2, 46% wanted to clarify their opinions and, therefore, expanded on their answers.
Some participants experienced technical problems and requested an extension of the deadline. At the beginning of the study, it became apparent that entering the Extranet pages was not possible for all participants because of problems with firewalls or browsers. For these individuals, an HTML alternative was provided; eight respondents used this alternative in Q1 and three in Q2. At the start of the survey, one participant initiated an Extranet discussion; however, no one replied. Thirty-nine per cent had visited the discussion page, but only 8% had read all the comments on that page. No one took part in the two organized online forums.
From the system report, it could be seen that 69% of the participants had visited the Extranet pages not only to fill in the questionnaires, but also to check the other information available on the pages (Figure 6). The average time to fill in the questionnaire was approximately 5 min for background information, 21 min for Q1 and 41 min for Q2.
Respondent feedback
At the end of the survey, respondents were offered the chance to provide feedback through an anonymous evaluation form. Only four participants returned the form, all of whom found the subject of the survey interesting, and three of the four found the methodology suitable. No one found the Internet technology difficult to use. All found the instructions clear and said that the survey matched their expectations. Two had participated because of interest in the subject, and three of the four gave lack of time as a reason for not taking part in the discussion and the online forum.
Discussion
Altogether, the methodology worked fairly well for this type of opinion survey. The number of respondents and their written comments indicated that most welcomed the opportunity to express their views. The Delphi method fulfilled expectations and was an appropriate tool for contacting experts anonymously.
The WebCT Extranet homepages functioned satisfactorily for the survey. All the necessary information could be offered in an illustrative format, and completing and sending the questionnaires was simple. However, as WebCT is mainly provided for the education market, some unnecessary instructions and numberings could not be deleted or changed; according to the respondents' feedback, these minor issues did not cause the participants difficulties. The major challenges of the Extranet were the firewall and browser problems, which should, of course, have been eliminated beforehand for all participants. Because these problems arose unexpectedly, the only solution was to offer the HTML alternative, which meant those respondents missed all of the other information given on the Internet pages. Apparently, many lost interest because of this, and the majority of the respondents who used the HTML alternative in Q1 discontinued the survey in Q2. The opportunity for discussion and online debate was not utilized even though the availability of these functions was strongly emphasized.
The biggest problem to overcome was the participants' lack of time. This reason for not participating was given mainly by representatives from the pharmaceutical industry. The authorities were mostly willing to participate, but only one or two participants were gathered from each country. The pharmaceutical schools found the subject interesting, although some doubted their own suitability.
As previously mentioned, using e-mail alone to contact potential participants was insufficient. Some of those who did not participate may have considered themselves not to be experts in the field and, therefore, excluded themselves. Other probable reasons for not participating after receiving the e-mail request are that not everyone is fully familiar with electronic communication, and that the explosion in the quantity of information delivered through the Internet and e-mail has created a need to filter information. The latter may be one reason for the reported lower response rates of e-mail Delphi surveys compared with postal versions. However, in this survey, more participants were willing to continue to Q2 than in many comparable surveys, in which response rates normally fall dramatically in the second and subsequent rounds.
A group size of approximately 30 proved satisfactory for gathering overall information about opinions on pharmaceutical process validation. The group cannot be regarded as very homogeneous because it consisted of experts from 10 different European countries and from the three different parties. Thus, the group was representative of the expert population despite the limited total number of participants. As can be seen from Figure 2, all the participants, with one exception, reported that they practiced, taught or controlled process validation in their work, and they can, therefore, be regarded as experts. It is important to note that above a certain threshold, the inclusion of more respondents contributes only marginal statistical and qualitative improvements.
Conclusion
The Delphi method was found to be a suitable tool for measuring opinion in the pharmaceutical field. It is particularly useful in the pharmaceutical manufacturing sector, where discussion between the regulated industry and the regulators on a "neutral" basis is often difficult to achieve in face-to-face meetings. Because of this gap between the two parties, many regulations are accepted by the industry without official criticism and real assessment, and as a consequence, a great deal of unnecessary work is performed. The Delphi method is well suited to this type of situation: it can be organized anonymously and can bring together geographically dispersed experts.
The use of the Internet and electronic communication gives the method clear advantages: the survey can be organized much faster, the group size can easily be increased and a wealth of supporting information can be provided. However, in a climate where the quantity of electronic information is ever increasing, there is a high chance that some of it will be ignored; this is a threat to the use of electronic communication. For this reason, the Internet Delphi demands highly motivated participants; ideally, the method should be used in situations where the participants clearly see the advantages of taking part and can do so without fear of time constraints. If these prerequisites can be met, the Internet Delphi can be used for the systematic assessment of any kind of new technology or methodology in pharmaceutical manufacturing or pharmaceutical quality assurance. There should, however, always be at least one independent, neutral person available to serve as a reporter between the rounds and after the survey. Furthermore, possible firewall and browser problems have to be taken into consideration before the start of the survey.

Using a Delphi Survey to Assess the Value of Pharmaceutical Process Validation Part 1: Survey Methodology 2

Given the pharmaceutical industry's sensitivity concerning knowledge sharing and to get both the regulating authorities and the regulated industry involved in the survey, anonymity was regarded as essential.

Figure 3 and Figure 4.
The search for participants took 5 months and was the most challenging part of the survey, particularly contacting industry experts because their e-mail addresses could not be located. To find suitable people from the pharmaceutical industry, a snowball method was used.23 Additionally, addresses for quality assurance, product development and production experts were requested from the representatives of appropriate foreign companies in Finland and from the qualified persons of appropriate domestic companies. However, only a few participants from the foreign companies participated because the representatives did not know who the right people were to ask, or the company refused to participate because of time constraints. The only effective way of obtaining willing participants was finally found to be direct telephone contact; this method also worked well for all three pharmaceutical fields. The size of the expert group is also important to the outcome of a Delphi study, but it depends on the homogeneity of the expert population and whether the study searches for qualitative or quantitative results.23 Other Delphi surveys have varied in size from 10–15 up to 2000–3000.23,24 For this survey, the number of experts was limited and the study mainly searched for qualitative results. Thus, a group of approximately 30–50 participants was considered apposite.
The questionnaires
An extremely important part of the Delphi method is the questionnaires, particularly the first round questionnaire (Q1). Q1 needs to be easy and clear to motivate and encourage the respondents; otherwise they may lose interest. The design of the questionnaires was carefully discussed by the advisory group before the start. The objective was to form easy-to-use and fast-to-complete questionnaires. It was agreed to keep the number of questions to a minimum and to limit the use of open-ended questions. These measures not only avoided making the survey too time consuming, but also removed an excuse for those who might otherwise have abandoned it. The use of open-ended questions is, though, often found necessary to eliminate the possible external bias linked to the guiding role of the investigator.7 Instead of open-ended questions, external bias was countered by using a mixture of negatively and positively worded arguments. The questionnaires were also pre-tested in a pilot study among the advisory group and external experts.
Although the aim was to identify attitudes towards the usefulness of pharmaceutical process validation, a further aim was to establish whether attitudes differ between European countries or between the different parties.
Operationalization of the subject was done under five headings:
  • How do you feel about process validation?
  • What are the benefits of process validation?
  • What are the negative aspects of process validation?
  • How can we make process validation easier and more effective to get the most from it?
  • What hinders positive thinking on process validation?

The first three questions were designed to measure the overall attitude towards process validation; the final two were mostly intended to determine the reasons underlying those attitudes, and also to find out whether specific tools of process validation were known among the participants.
The initial round started with a background information questionnaire, which also included one final question estimating the overall opinion on process validation. This questionnaire was to be completed before commencing Q1, and thus, the last question served as a control, measuring the attitude at the beginning of the study. The same question was repeated at the end of the second round questionnaire (Q2).

Figure 5: Pharmaceutical participant demography: gender, education and age.
Q1 consisted of 31 questions, most of which were multiple-choice. Only three of the questions were open-ended, to encourage personal expression and improve its quality. Q2 included some new or modified questions, but most were repeated in their original form together with a summary of the answers from Q1. In Q2, the opportunity to add comments was provided after every question and was encouraged by offering participants the option to answer in their native language. In this round, the questions were also posed in a slightly different order, with the addition of one new heading, "Cost of validation." In Q1, questions concerning the cost of validation had been spread under other headings.
The Internet as an environment for the survey
The WebCT (WebCT, Inc., Lynnfield, Massachusetts, USA) learning environment26 was chosen as the platform for the survey because it was already being used at Helsinki University and offered the required tools. Extranet homepages were constructed, and the questionnaires, together with supporting communication tools and information pages, were available from the public Internet page. The participants were e-mailed the passwords required to access these pages.
Guidance and other information, including definitions of the critical terms, were offered on the homepages, and an online forum was available for anonymous discussion. The discussion area was continuously available, and live online discussions were organized twice during the survey: before and after Q2. The survey's public Internet homepage can still be viewed.

Using a Delphi Survey to Assess the Value of Pharmaceutical Process Validation Part 1: Survey Methodology 1

Despite the long history of pharmaceutical validation, process validation in pharmaceutical manufacturing continues to be topical. European regulations regarding process validation were renewed in autumn 2001, which again brought the subject under the spotlight. Many people working in pharmaceutical production are now reviewing the state of their compliance practices and posing the question: "How will we benefit from process validation?"
To estimate the value of process validation, a systematic evaluation of the collected opinions on and experiences of it was performed using technology assessment (TA). TA is an evaluation process that aims to protect people and society from the consequences of rapid technological developments and attempts to identify all the possible impacts of a technology, not just the intended ones.1 Today, technology in this context refers not only to the logical products of science, but also to the attitudes, processes, apparatus and consequences associated with it; in that wider meaning, the principles of TA are well suited to the evaluation of process validation as a tool for pharmaceutical manufacturing.

Changes in the Delphi survey
There are a number of different methods of TA; this study used the synthesis method (the compilation and evaluation of all available knowledge1). Initially, a literature search was performed,2 which revealed a lack of European experts' comments. Therefore, during autumn 2001, an experimental survey was conducted amongst European experts in the pharmaceutical fields of manufacturing, regulation and academia to unearth their opinions.
Objectives
The main objectives of the study were to explore the value of process validation and to ascertain the best tools to perform it. Additionally, the study provided an opportunity to test how the principles and methods of TA could be used in the field of pharmaceutical quality assurance.
Methodology
Given that there is no single solution to process validation and that the value of process validation cannot be evaluated using solid empirical measurement, but rather by informed judgement,3 a discussion group was found to be an effective method for collecting information. For this reason, the Delphi method was chosen.
Delphi method. The procedure used in the Delphi method aims at structuring and distilling the mass of information from a selected group of experts by means of a series of questionnaires based on a structured process with controlled feedback.4 Moreover, this method was chosen for the following benefits:
  • it enables participants from various countries and different fields (industry, authorities, schools) to take part
  • it allows anonymous participation; a benefit that was of special value because the survey intended to facilitate discussion between the industry and its authorities, and to obtain comments from different organizational levels
  • participants can take part asynchronously; that is, one may choose when to participate
  • participants can choose to contribute to areas in which they are best qualified.

For development in the Delphi survey, see sidebar "Changes in the Delphi survey."

Figure 1 and Figure 2.
The Internet and e-mail. To accelerate communication, the Internet and e-mail were used. Using the Internet was also beneficial because it offered supporting tools for group communication, such as the potential for online discussion.6 Furthermore, the Internet provides a better and more illustrative means of informing the participants of the survey's key elements.
Principles of the Delphi technique
Although TA synthesis methods are frequently used to predict future scenarios, they can also be employed to critically examine the state-of-the-art of a given field.1,7 One of the most popular tools of synthesis is the Delphi technique. The aim of most Delphi techniques is the reliable and creative exploration of ideas or the production of suitable information for decision-making.
The replies to one round of questions are summarized and used to construct the next questionnaire. This reiterative process is continued until consensus or clear disagreement is reached among the participants.
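The reiterative loop just described can be sketched in code. The following Python fragment is a hypothetical illustration, not part of the study: it summarizes one question's Likert-scale replies for feedback and uses a narrow interquartile range as the consensus criterion, a convention assumed here rather than taken from the survey itself.

```python
# Hypothetical sketch of the reiterative Delphi loop: summarize a round's
# replies, feed the summary back, and stop when the panel converges.
# The IQR <= 1 consensus rule is an illustrative convention only.
from statistics import median, quantiles

def summarize_round(responses):
    """Summarize one question's Likert-scale (1-5) replies for feedback."""
    q1, _, q3 = quantiles(responses, n=4)  # quartiles of the replies
    return {"median": median(responses), "iqr": q3 - q1}

def consensus_reached(responses, iqr_threshold=1.0):
    """Treat a narrow interquartile range as consensus on the question."""
    return summarize_round(responses)["iqr"] <= iqr_threshold

# Example: replies to one question on a 1-5 agreement scale.
round_replies = [4, 4, 5, 4, 3, 4, 4, 5, 4, 4]
summary = summarize_round(round_replies)  # would be fed back with the next round
```

In a real survey the summary (median and spread) would be returned to the participants with the next questionnaire, and the loop would repeat until `consensus_reached` is true or the replies clearly diverge.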
Locating experts
One disadvantage of the Delphi method is the definition and selection of experts; that is, whom to regard as an expert and how to create a representative group.
For this survey, experts were defined as those people working on process validation in:
  • the pharmaceutical industry
  • pharmaceutical authorities
  • pharmaceutical schools
  • consultant companies.

Representatives of the pharmaceutical industry were chosen from quality assurance, production and product development positions; representatives of the authorities had to either evaluate pharmaceutical and chemical aspects of the marketing authorization applications or work in inspection; and the representatives of pharmaceutical education had to teach process validation. Further details were unspecified; educational background was disregarded and the objective was to obtain the widest possible representation of different organizational levels. The level of experience of pharmaceutical process validation would have been of interest, but because of the limited number of experts in the field, this issue was not used as an exclusion criterion.

Wednesday, August 18, 2010

Wireless Technology Reduces Compliance Costs

Life-science companies struggle to reduce their products' total cost of quality (TCQ) while maintaining high levels of regulatory compliance, product quality, and customer service. Several aspects of pharmaceutical operations, such as regulatory compliance and the validation of manufacturing processes, are costly. Manufacturing products according to the process definition and within critical-to-quality (CTQ) values, while documenting compliance, also entails much expense. Manufacturers in the life sciences are increasingly implementing wireless technologies to reduce the overall cost of compliance.
Today's wireless technologies are not the same radio-frequency devices that were implemented in the 1980s for inventory management. Various wireless products are now available, and companies are applying them in innovative ways, in some cases achieving large cost savings. Most recently, manufacturers have successfully applied wireless process transmitters and instrumentation in control and distributed control system (DCS) solutions that address validation and safety concerns.
For example, one company had to validate and continuously monitor critical quality parameters in an alcohol tank farm. Because of the concentration of the liquid, the environment was harmful to standard, hard-wired instrumentation. The environment also presented safety and health hazards to the personnel performing the validation and monitoring.
The company deployed wireless-based process instrumentation to continuously monitor the temperature, pressure, volume, and specific gravity of the fluid in the tank. Connecting wireless instrumentation to the tank farm’s DCS reduced the cost of validation by 50% and the overall cost of continuous monitoring by 30%. This arrangement also shielded plant personnel from the safety hazards.
Together with web-based manufacturing controls and systems, wireless technology helps manufacturing and quality personnel operate equipment remotely. This flexibility is achieved using various mobile wireless terminals (e.g., palm devices), pentablets (i.e., handheld computers such as Motion Computing’s “F5” tablet), and convertibles (i.e., laptops such as Panasonic’s rugged “Toughbook” device).
Often, paper-based validation protocols may be adulterated with chemicals and process materials when initiated and completed near process equipment. In some cases, portions of the paper protocol are lost or destroyed, and the protocol must be executed again. But paperless engineering protocols can be initiated at the point of validation. In a paperless approach, the protocols are initiated electronically to capture the execution data and signatures at the point of validation. Rugged wireless terminals manage the electronic protocol. For life-science companies that adopt this approach, the cost savings associated with validation are significant.
A major factor that increases TCQ is the cost to document evidence of manufacturing compliance. Many life-science companies are moving to a paperless batch record that enforces execution decisions and workflow during manufacturing. Using rugged wireless terminals in manufacturing to capture documented evidence where events occur is an easy way to establish a paperless EBR (electronic batch record). The EBR becomes the foundation for a release-by-exception strategy and results in significant savings in overall TCQ.
Data security is critically important to life-science companies and must be investigated by any manufacturer considering wireless technology. Wireless transmission makes it easier for unauthorized people to intercept sensitive data, and encryption is essential to protect it. Because security is such a significant concern to life-science companies, vendors such as Cisco Systems (San Jose, CA) are aggressively seeking to embed encryption directly into their wireless technology.
In conclusion, adopting wireless process instrumentation and wireless terminal technology for pharmaceutical manufacturing directly leads to efficient process and equipment validation, consistent manufacturing compliance, improved safety for plant personnel, and an overall reduction in TCQ.
Mike Power is a life-science supply-chain manager at BearingPoint (McLean, VA).

Reducing the risk of microbial contamination


Maintaining asepsis and sterility is the primary challenge in implementing aseptic techniques and sterile processes in biopharmaceutical manufacture. Efforts to reduce the risk of microbial contamination of aseptically filled biotech products beyond their already low level represent engineering challenges that do not readily complement the progress in biotech R&D.
Process improvements
Recent innovations in aseptic techniques and sterilisation in biopharmaceutical manufacturing that have been key to improving the process include the introduction of single-use disposable filtration and filling systems composed of large-scale sterilising filter capsules, polymeric biocontainers, tubing and sterile connectors. These pre‑assembled systems — complete with validation and other documentation to meet regulatory and industry requirements — are supplied pre‑sterilised by gamma irradiation, reducing both the number and risk of aseptic connections, as well as eliminating the user’s sterilisation and validation requirements. Closed pre‑sterilised systems provide higher assurance of maintaining sterility and cleanliness of fluid pathways up to the point of filling.
Quality control and compliance
In an industry where patient safety is paramount, regulatory changes have kept aseptic manufacturers on their toes. The latest revision of EU GMP Annex 1 (March 2009) calls for bioburden to be determined prior to sterilisation for every batch of aseptically filled product. Traditional pharmaceutical microbiology quality control methods require at least 3–5 days to quantify microbial levels, so results are only available after processing, and conducting such tests on every batch is considered burdensome. The development of rapid microbiology technologies, such as adenosine triphosphate (ATP) bioluminescence and PCR nucleic acid-based tests, however, makes such bioburden monitoring of pre-sterilisation feeds, intermediates and raw materials feasible with minimal cost and labour, providing results within 24 h rather than several days. Regulators in Europe and the US have encouraged the implementation of rapid microbiology methods and are working to ease regulatory validation requirements to facilitate broader implementation.
The future of aseptic techniques in bioprocessing
The increased use of preassembled, presterilised single-use filtration and filling systems to minimise aseptic connections and sterilisation will probably feature in the future of bioprocessing. More final filling will be done in isolators under robotic control to remove operator involvement — the primary source of microbial contamination of aseptically filled products. Rapid microbiology methods will be incorporated for liquid feed and environmental monitoring. While 0.2 μm rated filters will continue to be the most widely used for the sterilisation of aseptically filled sterile drug products, high capacity 0.1 μm rated sterilising filters will be increasingly used for soy‑based media sterilisation in cell culture and aseptic fill validation to ensure absence of contaminating mycoplasma (e.g., Acholeplasma laidlawii). Although these technologies already exist they are not yet widely implemented. Innovations are needed to increase ease of use, reduce cost of installation and validation, and increase regulatory familiarity and acceptance. Improvements will be seen that further reduce media fill and product sterility test failures, product recalls and regulatory actions for insufficient sterility or sterilisation validation.

Dispelling Cleaning Validation Myths: Part I C

Somehow, nonspecific methods are viewed as less robust than specific methods. In fact, for cleaning validation, using a method such as TOC actually makes it more difficult for a manufacturer to meet its cleaning validation acceptance limits (again provided the limits are set correctly and the TOC data are converted appropriately into the target residue).4 If such methods were unacceptable, almost all biotechnology facilities would be shut down because TOC is used widely in the industry for measuring residues of the actives (in biotech, TOC is usually measuring degraded actives, but the measured TOC is expressed as if it were the undegraded active).
The use of TOC is further supported by a Human Drug Current Good Manufacturing Practice (cGMP) Note from FDA in which it states: "We think TOC or TC can be an acceptable method for monitoring residues routinely and for cleaning validation."5 The cGMP note was replicated as a "Q&A for cGMP for Drugs" in 2002.6 FDA goes on to state the conditions that should be adhered to if TOC is used as the analytical method. However, the implication is that such methods are acceptable if used correctly.
Where did the myth come from? My speculation is that it came from using TOC as an analytical method but setting limits based only on compendial water specifications (that is, 500 ppb TOC). It should be clear from the Myth 1 discussion that in this case TOC is an unacceptable method (correctly stated, it is the limit setting that is unacceptable, but it is easy to see how this became "TOC is unacceptable"). This is further complicated by a statement in the PIC/S guidance document that analytical methods for measuring residues "should be specific for the substance assayed".2 Could this be interpreted to mean that only a "specific analytical method" may be used? Again, such an interpretation would wreak havoc with the biotech industry. If a specific analytical method were required, the statement would be more explicit. This statement is probably akin to that in FDA's guidance that for rinse samples, "a direct measurement of the residue or contaminant" should be made.1 Is TOC a direct measure of an organic active? I would argue that it is. This conundrum of what is meant in the PIC/S guidance should be recognized, but it should not deter us from using TOC appropriately for cleaning validation purposes.
Some believe that FDA's guidance document requires specific methods. What it actually says is that you should "determine the specificity and sensitivity of the analytical method...."1 This is a far cry from requiring specific methods. A more reasonable interpretation is that you should understand the specificity of your analytical method, and take that into consideration as you utilize that method so that it is used correctly. This brings us to the issue of using nonspecific methods such as TOC correctly. For simplicity, I will discuss the correct use of TOC. One FDA requirement is that the TOC appropriately oxidize and measure the organic species in the target residue.6 Therefore, you will perform analytical method validation using the residue and TOC to confirm the method's applicability. Applicability indicates that the target residue is appropriately oxidized, and that it is appropriately water soluble such that it can be measured.
Another requirement is that any detected carbon be attributed to the target residue. The carbon in a sample may be partially from the active, excipients and cleaning agent. However, we are not allowed to apportion the measured carbon among these different sources. If we use TOC, we must consider (as a worst-case assumption) that all the carbon is because of the target residue (the active, if that is the target residue). FDA also states that you "should limit background... as much as possible." Why? Because it is just good practice to decrease the background (the TOC blank) to as low and as consistent a value as possible. This is why low TOC water, ultra-clean swabs and precleaned vials are typically used for swabbing with TOC. A final requirement is determining sample stability to confirm method applicability under expected holding conditions (post sampling, before analysis). Of course, this last requirement is relevant to any analytical method. There are other TOC requirements that are common to all analytical methods, including performing sampling recovery studies.1 The bottom line is you can use TOC, but use it correctly. 
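The worst-case attribution described above can be put into numbers. The sketch below is a hypothetical illustration (the function name, figures and the 60% carbon fraction are my own assumptions, not values from FDA or the article): it subtracts the blank, converts the net TOC of the swab extract to micrograms of carbon, and then attributes all of that carbon to the target residue.

```python
# Hypothetical worst-case conversion of a TOC swab result into target
# residue. All names and numbers are illustrative assumptions.

def toc_to_residue_ug(toc_ppb, blank_ppb, sample_volume_ml, carbon_fraction):
    """Convert a TOC reading to worst-case micrograms of target residue.

    toc_ppb          -- measured TOC of the swab extract (ppb = ug C per L)
    blank_ppb        -- TOC of the blank (low-TOC water + swab + vial)
    sample_volume_ml -- volume of water the swab was desorbed into
    carbon_fraction  -- mass fraction of carbon in the target residue
    """
    net_ppb = max(toc_ppb - blank_ppb, 0.0)            # remove the blank
    carbon_ug = net_ppb * (sample_volume_ml / 1000.0)  # ug of carbon in sample
    # Worst case: ALL measured carbon is attributed to the target residue.
    return carbon_ug / carbon_fraction

# Example: 250 ppb measured, 50 ppb blank, 40 mL desorption volume,
# an active that is about 60% carbon by mass.
residue_ug = toc_to_residue_ug(250.0, 50.0, 40.0, 0.60)
```

The result would then be compared against the swab acceptance limit; note that a lower carbon fraction in the active makes the worst-case residue figure larger, which is the conservative direction.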
Summary
Recognition of these myths, and their lack both of scientific and written regulatory justification, can help companies avoid unnecessary work that adds little or no value to a cleaning validation programme.
References
1. http://www.fda.gov/ora/inspect_ref/igs/valid.html
2. PIC/S Document PI 006-2, http://www.picscheme.org/
3. http://www.ich.org/
4. D. A. LeBlanc, "Why TOC is Acceptable", Cleaning Memos 3, Cleaning Validation Technologies, 24–27 (2003).
5. FDA, Human Drug cGMP Notes, 1st Quarter 2002, PDA Letter 38(9), 9–13 (2002).
6. http://www.fda.gov/cder/guidance/cGMPs/equipment.htm
Destin A. LeBlanc is a consultant at Cleaning Validation Technologies, San Antonio, TX, USA.

Dispelling Cleaning Validation Myths: Part I B

By: Destin A. LeBlanc

Rinse sampling has also been misused when recovery studies are not performed. A concern of FDA (expressed in its guidance document) is the dirty pot analogy.1 Do you determine the pot is clean by evaluating the pot or the rinse water? One obvious answer is to test the pot. However, another is to test the rinse water, provided it can be established that any residue on the pot would be present in the rinse water. Just as recovery studies for swab sampling are done by spiking model surfaces with the target residue and then sampling by the swab procedure, rinse sampling recoveries should also be performed by spiking model surfaces with the target residue and performing rinse sampling on those surfaces to demonstrate quantitative recovery.
Lab rinse sampling recoveries cannot replicate production equipment rinsing. However, it is possible to simulate the rinsing conditions to demonstrate whether the rinsing process quantitatively removes surface residue. If it does demonstrate acceptable recovery, then the dirty pot analogy has been overcome. Where rinse sampling is used to demonstrate cleanliness of inaccessible surfaces (for example), rinse sampling recoveries should be performed to 'validate' that method. In this scenario, quantitative recovery is not 100% recovery. The acceptable recovery level is generally the same as that for swab sampling, which can vary from about 50–75%.
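As a hypothetical illustration of how a validated recovery figure is applied (the function and numbers are mine, not from any guidance document), the measured value is scaled up by the recovery fraction established in the spiking study:

```python
# Hypothetical sketch: correcting a rinse (or swab) result by the recovery
# fraction from a spiked-surface study. Numbers are illustrative only.

def corrected_residue_ug(measured_ug, recovery_fraction, min_recovery=0.50):
    """Scale a measured residue up by the validated recovery fraction."""
    if not min_recovery <= recovery_fraction <= 1.0:
        raise ValueError("recovery outside the acceptable range; "
                         "improve the sampling method before using results")
    return measured_ug / recovery_fraction

# Example: a 70% validated recovery means a 7 ug measurement implies
# roughly 10 ug actually present on the sampled surface.
estimate_ug = corrected_residue_ug(7.0, 0.70)
```

Raising an error for recoveries below the acceptable range reflects the point above: if the spiking study cannot demonstrate acceptable recovery, the sampling method itself must be improved before its results are used.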
I believe the misuse of rinse sampling has led to the myth that its use is unacceptable. Correct use of rinse sampling includes:
  • Carefully defining and controlling the rinse conditions.
  • Performing a rinse recovery study.
  • Making sure what you analyse is a direct measure of the target residue.
  • Setting limits for that target residue (in the rinse solution) based on scientific principles.

Myth 2
This is the idea that to use rinse sampling, you have to correlate it with swab sampling results. If what you mean is: "I need to make sure I get passing results by both swab and rinse samples," there may be an element of truth in this. However, if what is intended is that there should be a direct 1:1 (or similar) mathematical relationship between swab and rinse sampling values, then it is unreasonable to expect this to occur. Why? Swab sampling and rinse sampling measure two different things.
Swab sampling involves measuring the residue on a small area, which generally includes the worst-case locations (those most difficult to clean or likely to have unacceptable residue if cleaning is inadequate). However, rinse sampling covers a much larger surface area (perhaps the entire surface area of a manufacturing vessel), and, therefore, essentially averages the residue over all sampled surfaces. If failure occurs in swab sampling, it is reasonable to expect it to come from the worst-case locations and that, perhaps, other swabbed locations provide acceptable results.
If such is the case, it may be possible (if not probable) that a rinse sample will give acceptable results. But, if rinse sample results are unacceptable, you can expect that at least one swab sampling site should have failing results. The assumption in this is that you have calculated your limits appropriately (and did not do something such as set rinse limits based on compendia specifications for water).
I find it difficult to speculate on this myth's origins, except perhaps from an overzealous analytical group. Cleaning validation is hard enough in terms of ensuring necessary resources are available. Performing studies to mathematically 'correlate' swab and rinse sampling values does not add any value. What's more, do not expect them to mathematically correlate.
Myth 3
It is amazing how this one myth, that nonspecific methods are either unacceptable or less acceptable than specific methods, persists. Specific methods measure the target analyte (usually a given compound) in the presence of expected interferences.3 Specific methods include HPLC developed for a given compound.
Nonspecific methods measure a general property but do not determine which compound is responsible for it. Such methods include TOC and conductivity. TOC measures the organic carbon in a sample. In finished drug manufacture, the measured organic carbon in a cleaning validation swab sample may arise from any combination of the active, excipient(s) and cleaning agent (as well as contributions from the blank, which could include the water, the swab and the vial).

Dispelling Cleaning Validation Myths: Part I A

By: Destin A. LeBlanc
Every regulated technology seems to come up with a list of what regulatory authorities supposedly say you should and should not do. Cleaning validation for pharmaceutical process manufacturing equipment is no different. Unfortunately, while many of these 'thou shalts' and 'thou shalt nots' have a partial basis in fact, they are actually distortions of the truth that have come to have a life of their own. Hence I call them myths, even though cleaning validation is only about 15 years old.
This article will explore eight of these myths and attempt to explain the origin of each (although in many cases the explanation of the origin is just speculation on my part). In addition, I will try to explain why the myth is wrong, how something seemingly prohibited can be properly used, and how those things apparently required may be unnecessary.
My list of myths is not intended to be exhaustive. The first three are examined in Part I of this article. Myths 4–8 will be covered in Part II to be published in a forthcoming issue.
1. Regulatory authorities do not like rinse sampling.
2. You must correlate rinse sampling results with swab sampling results.
3. You cannot use nonspecific analytical methods.
4. If you use total organic carbon (TOC), you must correlate it with a specific method, such as HPLC.
5. Any measured residue is unacceptable.
6. Dose-based calculations are unacceptable.
7. Recovery percentages of different spiked levels should be linear.
8. You cannot validate manual cleaning.
Myth 1

The notion that regulatory authorities do not like or allow rinse sampling is false. FDA's cleaning validation guidance says: "There are two general types of sampling that have been found to be acceptable. The most desirable is the direct method of sampling the surface of the equipment. Another method is the use of rinse solutions."1 Some may want to emphasize that because direct sampling (i.e., swab sampling) is more desirable, it must follow that rinse sampling is less desirable. Although there is a certain logic to this, it overlooks the clear statement that both methods are acceptable. The Pharmaceutical Inspection Co-operation Scheme (PIC/S) guidance document says: "There are two methods of sampling that are considered to be acceptable, direct surface sampling (swab method) and indirect sampling (use of rinse solutions)."2 Again, this is a clear statement that rinse sampling is acceptable. I should point out that the PIC/S document goes on to say that a "combination of the two methods is generally the most desirable."
Why, therefore, has the myth arisen that rinse sampling is unacceptable? In the early days of cleaning validation, rinse sampling was used inappropriately. For example, some companies using rinse sampling set limits such that the rinse sample was acceptable if it met compendial specifications. In other words, they worked by the maxim: "water-for-injection [WFI] in, WFI out, therefore, my equipment is clean." This use of rinse sampling is inappropriate, but it still survives despite the fact that FDA's guidance document clearly states that "...it is not acceptable to simply test the rinse water for water quality (does it meet the compendia tests) rather than test it for potential contaminates [sic]."1 I should make it clear here that you can use TOC to measure a contaminant in the rinse water. However, the acceptance limit for TOC is not automatically 500 ppb: it must be justified based on traditional limit calculations, and it may be higher or lower than 500 ppb.
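The "traditional limit calculations" referred to above are commonly done with a dose-based maximum allowable carryover (MACO) formula, which can then be converted into a TOC concentration limit. The sketch below is illustrative rather than taken from the article; every input (doses, batch size, rinse volume, carbon fraction, safety factor) is a hypothetical value.

```python
def maco_mg(tdd_prev_mg: float, min_batch_next_mg: float,
            tdd_next_mg: float, safety_factor: float = 1000) -> float:
    """Dose-based maximum allowable carryover (mg): no more than
    1/safety_factor of the previous product's therapeutic daily dose
    may appear in a daily dose of the next product."""
    return (tdd_prev_mg / safety_factor) * (min_batch_next_mg / tdd_next_mg)

def rinse_toc_limit_ppb(maco_value_mg: float, rinse_volume_l: float,
                        carbon_fraction: float) -> float:
    """Translate a MACO into a TOC rinse limit (ppb carbon), assuming the
    entire carryover could appear in the rinse volume. 1 mg/L equals
    1 ppm, so multiplying by 1000 converts to ppb."""
    mg_carbon_per_l = (maco_value_mg / rinse_volume_l) * carbon_fraction
    return mg_carbon_per_l * 1000

# Hypothetical inputs: 100 mg daily dose (previous product), 50 kg minimum
# next batch, 500 mg daily dose (next product), 100 L rinse, residue 60% carbon
maco = maco_mg(tdd_prev_mg=100.0, min_batch_next_mg=50_000_000.0, tdd_next_mg=500.0)
print(maco)                                   # → 10000.0 (mg)
print(rinse_toc_limit_ppb(maco, 100.0, 0.6))  # → 60000.0 (ppb)
```

Note that the computed limit can come out above or below 500 ppb, which is exactly the article's point: the TOC acceptance limit must be derived, not assumed.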

Sunday, August 15, 2010

Running a Marathon in Flip-Flops – Part 1: The Value of Incorporating Prerequisites into Process Validation (page 3)

Manufacturing and inspection instrument calibration verification. Another important factor to assess during prerequisite verification (i.e., prior to the manufacturing runs) is verifying and documenting that each instrument used in the manufacturing and testing process that requires periodic calibration is within its current calibration interval, and that each will remain within that interval throughout the process validation activity. For example, a validation engineer managed a shipping validation project for a biopharmaceutical product using numerous rented temperature and humidity monitors. When the data were collected and reviewed, several of the instruments had results just outside the specified ranges. Upon investigation, it was found that numerous instruments used in the study had gone out of calibration during the process, making their results questionable. All product shipped was then considered of questionable quality, as was the study itself, requiring the process to be repeated and resulting in lost saleable product.
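The check described above reduces to confirming that every instrument's calibration-due date falls on or after the planned end of the validation activity. A minimal sketch, with hypothetical instrument tags and dates:

```python
from datetime import date

def calibration_covers_study(cal_due: date, study_end: date) -> bool:
    """True if the instrument will still be within its calibration
    interval when the validation activity finishes."""
    return cal_due >= study_end

study_end = date(2010, 9, 30)       # illustrative planned end of the runs
instruments = {
    "TH-007": date(2010, 12, 1),    # due after the study: acceptable
    "TH-012": date(2010, 9, 15),    # expires mid-study: flag before starting
}
flagged = [tag for tag, due in instruments.items()
           if not calibration_covers_study(due, study_end)]
print(flagged)  # → ['TH-012']
```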
Raw material status verification. Just as the manufacturing equipment and utilities used to produce a product must perform within predetermined criteria, the raw materials that go into the product must meet their predetermined specifications. As dictated by the good manufacturing practice (GMP) regulations, a raw material must be tested and approved prior to use. The acceptance of the raw materials called for in a process validation should therefore be verified before use. While this may seem redundant, spot-checking this aspect of the materials-management quality system before a critical effort such as process validation makes good business sense, even if it is not a specific regulatory requirement.
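This prerequisite amounts to a simple status-and-expiry check for each assigned lot before each run. A sketch with hypothetical statuses and dates:

```python
from datetime import date

def material_ok(status: str, expiry: date, planned_mfg: date) -> bool:
    """A lot may be used only if it has been released and will not
    have expired by the planned manufacturing date."""
    return status == "released" and expiry >= planned_mfg

# Illustrative: an API lot released in 2009 with a one-year shelf life,
# checked against a batch planned more than a year later
print(material_ok("released", date(2010, 6, 1), date(2010, 8, 20)))  # → False
```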
Consider a contract manufacturer that received a purchase order to produce a liquid oral dosage pharmaceutical product for a new customer. This activity, of course, requires process validation, and the minimum of three consecutive batches was agreed to by both parties. Raw material lots were assigned for each of the raw materials to be used in the three validation batches. As is typical in contract manufacturing, the timing of each batch was dictated by the customer's orders. Because of an unanticipated slump in the customer's product sales, the third batch was manufactured more than a year after the first two validation batches. The shelf life of the active ingredient was only one year, so it had expired; however, no raw material status prerequisite check was performed prior to manufacture. Upon testing the third lot, the quality control laboratory found the product samples to be subpotent.
An extensive investigation was conducted, which resulted in the batch failing and all three consecutive process validation batches having to be redone at the manufacturer's cost. This situation could have been avoided with a simple verification of raw material status prior to the manufacture of each process validation batch.
Analytical test method status verification. This is one of the more controversial prerequisite verifications to incorporate into the process validation program, because the laboratory is often perceived as independent of the production process. Nonetheless, as stated previously, the results obtained by the laboratory for a specific process are a critical piece of the overall process of manufacturing and releasing a quality product, because many of the validation conclusions rely on them. It is therefore of paramount importance to verify and document that all of the test methods have been validated (nonpharmacopeial methods) or shown to be suitable (pharmacopeial methods).
The purpose of performing this prerequisite verification is not to check the adequacy of the test method validation or suitability effort. Rather, it is a spot check to verify and document that method validation (if necessary) has been completed and closed out before moving forward with the costly and time-consuming effort of process validation. As an example, a sterile pharmaceutical manufacturer undergoing a preapproval inspection was given a 483 observation when the agency investigator discovered that the finished product potency test for the drug product had not been validated before the validation activity began. The entire validation was called into question by the investigator and ultimately had to be repeated.
Specified process parameters verification. If a product has been thoroughly developed, all of the critical manufacturing process parameters (i.e., processing ranges) specified in the master batch record (MBR) are based on results obtained during the process development effort and verified during the confirmation run or technology transfer phase.
However, many times one or more ranges specified in an MBR are not associated with any justification (i.e., there is no record of where the range came from in the first place). While it may seem an acceptable risk to simply run the process validation with specified yet unsubstantiated ranges (rather than generating a development report retrospectively), doing so truly presents a significant risk.
While never recommended, ranges that have not been challenged or assessed before process validation must be challenged during the process validation effort. This "dry run" approach carries a significant cost if a failure occurs during execution of the runs, even when the process is well characterized and well established. Without documentation supporting the specified range (e.g., a development report), a processing failure associated with a specified process parameter can be given a defensible corrective and preventive action (CAPA) only through a thorough retrospective analysis of a statistically significant number of historical batches for which the specified process parameter data are obtainable. Of course, this would lead a savvy auditor to question how parameters for other products were developed as well. As you can see, this can be very costly on many fronts. The only way to avoid opening this proverbial can of worms is to verify and document the origin of each specified process parameter in the MBR before executing the process validation runs.
Product quality attributes verification. The purpose of this final process validation prerequisite is to verify and document that the in-process and finished product quality attributes match those in the product development reports or the most currently approved specifications in the product's regulatory submission.
When a product has been approved both in the United States and in countries outside the US, this verification becomes even more important, because specifications for the same product can differ from country to country. For example, a solid dosage form manufacturer was undergoing a process (re)validation effort after making some process improvements. The product was approved for distribution in both the US and Canada. Before commencing the process validation runs, this prerequisite verification of product quality attributes was conducted, at which time it was recognized that the impurity specification for the same product differed between the two countries: the Canadian specifications were tighter than the US specifications, yet only the US values were listed in the validation protocol. Had this prerequisite not been verified before the production runs, the process validation effort might have had difficulty meeting the stricter Canadian requirements.
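One way to guard against this gap is to build the protocol's acceptance criteria from the tightest specification for each attribute across all approved markets. A sketch with hypothetical impurity limits:

```python
# Hypothetical impurity upper limits (% w/w) by market
specs = {
    "total_impurities": {"US": 2.0, "Canada": 1.5},
    "impurity_A":       {"US": 0.5, "Canada": 0.3},
}

# The protocol criterion for each attribute is the tightest (lowest) limit
protocol_limits = {attr: min(by_country.values())
                   for attr, by_country in specs.items()}
print(protocol_limits)  # → {'total_impurities': 1.5, 'impurity_A': 0.3}
```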
Conclusion
In order to compete in the Boston Marathon, runners must demonstrate to the race organizers that they are ready to compete, so unqualified entrants are weeded out of this prestigious event. In addition, the qualified runners check their own gear before the event because they want to maximize their chances of success.
The same concept applies to process validation. By using the process validation prerequisite approach, many of the potential pitfalls and hazards along the process validation route can be avoided before the costly production runs and laboratory testing.
Not only does this approach make good economic sense; it can also demonstrate, during government and customer audits, that quality is built into the process and that the quality-systems approach to regulated product manufacturing is alive and well in your facility.
Nancy Cafmeyer, a consultant at Advanced Biomedical Consulting (ABC), LLC, has over 28 years of industry experience and has consulted at numerous pharmaceutical, nutritional supplement, and medical device manufacturers. Prior to joining ABC, she held both hands-on and management positions at companies such as King Pharmaceutical, Geopharma, and Daniels Pharmaceuticals.
Jonathan M. Lewis, a principal at Advanced Biomedical Consulting (ABC), LLC, has consulted at over 50 biopharmaceutical, pharmaceutical, and medical device manufacturers. Prior to starting ABC, he held both hands-on and management positions at companies such as Cardinal Health, KMI, and PAREXEL International.
Advanced Biomedical Consulting (ABC), LLC, PO Box 76405, St. Petersburg, FL 33734, tel. 888.671.4292, fax 727.897.9522,
http://www.abcforfda.com/

References
1. I.R. Berry and R.A. Nash, Eds., Pharmaceutical Process Validation (Marcel Dekker, New York, 2nd ed., 1993), pp. xiii–24.
2. Code of Federal Regulations, Title 21, Food and Drugs, Part 211, (FDA, Department of Health and Human Services, Rockville, MD, April 1, 2006).
3. Guideline on General Principles of Process Validation, (FDA, Rockville, MD, May 1987).
4. Compliance Policy Guide Manual, Chapter 4, Process Validation Requirements for Drug Products and Active Pharmaceutical Ingredients Subject to Pre-Market Approval, Document 7132c.08, (FDA, Rockville, MD, 2006).
5. Guidance for Industry, Q7A, Good Manufacturing Practice Guidance for Active Pharmaceutical Ingredients, (FDA, Rockville, MD, August 2001).

Running a Marathon in Flip-Flops – Part 1: The Value of Incorporating Prerequisites into Process Validation (page 2)

If the effective version captured during protocol generation is the same as the version being used on the production floor, it is safe to initiate the process validation production runs. If not, there is a high probability that the protocol is inaccurate (possibly resulting in numerous "failures"), or that the process itself is not ready for process validation or, even worse, for commercial production.
For example, during a recent process validation activity at a liquid dosage pharmaceutical plant, modifications were made to the MBR less than a day before the already approved protocol was to be executed. Certain processes were modified without the knowledge and consent of the validation team. As a result, numerous deviations (i.e., investigations) had to be documented and addressed during execution of the process validation production runs, because the approved protocol no longer stated the correct directions to follow. The result was a large waste of time and money, not to mention the compliance questions it raised (i.e., about the ability of the quality system to catch issues prior to and during production). Had the MBR status been verified as a prerequisite, this issue would have been caught before the runs were executed.
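The version check itself is trivial to express; the hard part is remembering to perform it before execution begins. A sketch with hypothetical revision identifiers:

```python
def mbr_matches(protocol_mbr: str, floor_mbr: str) -> bool:
    """The protocol was written against a specific MBR revision; execution
    should not start if the production floor is running a different one."""
    return protocol_mbr == floor_mbr

# The protocol references Rev C, but the floor was updated to Rev D overnight
print(mbr_matches("MBR-042 Rev C", "MBR-042 Rev D"))  # → False
```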
Operator and test personnel training verification. In manufacturing, as in the analytical laboratory, many standard operating procedures (SOPs) and analytical test procedures are used. Because the purpose of process validation is to provide assurance of the repeatability of a process, operators and analysts must be trained on all procedures that may affect the manufacturing and testing of the process. This prerequisite checks the training records of the operators and laboratory analysts to ensure that they have documented training on the procedures they will perform during the process validation activity. Again, not only is the alternative a compliance risk; failures due purely to untrained operator or analyst error result in additional consecutive process validation production runs, an avoidable waste of time and money.
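The training-record check reduces to a set difference between the procedures a person will execute and the procedures they have documented training on. A sketch with hypothetical SOP numbers:

```python
def untrained(required_sops: set, training_record: set) -> set:
    """Procedures the person will execute but has no documented training on."""
    return required_sops - training_record

required = {"SOP-101", "SOP-205", "TM-330"}   # procedures in the protocol
record = {"SOP-101", "SOP-205"}               # illustrative training record
print(sorted(untrained(required, record)))    # → ['TM-330']
```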
For example, during a recent preapproval inspection of a pharmaceutical manufacturer, an investigator reviewing the executed process validation protocol for the product being assessed asked to see the training records for two of the analysts who had performed the release testing on the finished lot of product. When those records were produced, the company realized that the two analysts had not been trained on the test procedures. This called into question the validity of the test results and ended with the company repeating the costly and time-consuming testing. The situation could easily have been avoided by verifying training prior to execution.
Equipment and utility system qualification verification. Just as a marathon runner competes in a carefully chosen pair of running shoes rather than everyday flip-flops, equipment and utility systems are two of the most critical areas affecting the outcome of a manufacturing process. It is important to verify that the commercial equipment and supporting utility systems have, first, been qualified and, second, been qualified within the specified process ranges before executing the process validation manufacturing runs.
Not only is the lack of equipment or utility system qualification a common gap discovered during inspections, one for which entire process validation efforts have been disregarded, but many unforeseen commercial production issues can arise when these activities have not been completed before the process validation production runs. This was clearly demonstrated when a coating process for a solid oral dosage pharmaceutical was developed and optimized at a specific spray rate using a process-development pan coater. The pan coater used during the process validation runs, although similar in function to the development unit, had not been challenged during equipment qualification at a spray rate that bracketed the intended use. When the process went into validation, the difference in spray nozzles left the commercial pan coater unable to consistently achieve the spray rate specified in the MBR.
In this case, the entire batch was lost because the problem was discovered after the coating process was already in progress. A prerequisite verification of equipment qualification would have avoided the loss of a potentially saleable batch as well as the requirement to run a new set of consecutive process validation batches.
