Friday, June 19, 2009

System Validation Requirements

Kevin Lloyd is vice president and chief technology officer of State College, PA-based Centre Analytical Laboratories.

In 1997 the U.S. Food and Drug Administration, in response to requests from industry, issued a regulation specifying criteria for the acceptance of electronic records and electronic signatures as the legal equivalent of manual, paper-based systems. The citation for this regulation can be found in the Federal Register as 21CFR11.

Citation 21CFR11, also known as Part 11, offers many advantages to industry, including a shortened period for agency review, nearly instantaneous reconstruction of studies, and enhanced confidentiality through limited authorized system access.

In order for an organization to reap the benefits of Part 11, a number of requirements must be met; see Table 1 (in the print version) for a list of these requirements. Due to the wide scope of Part 11, this article will focus only on the system validation requirement; future articles will cover the full scope of the regulation.

According to information posted on the FDA’s web site, validation is the establishment of documented evidence that provides a high degree of assurance that a specific process will consistently yield a product meeting its predetermined specifications and quality attributes. Implied by this statement: “If you didn’t document it, you didn’t do it.”

At some point in time, the FDA will inspect every company operating within an FDA-regulated environment. Although the FDA has made it clear that inspections will not be conducted solely for Part 11 compliance, it is well within its purview to inspect for Part 11 compliance within the context of a general audit.

A review of www.fda.gov reveals some common FDA-483 observations recorded at previously inspected sites. These citations pertain to absent or incomplete records and/or practices. From the 483s, the following assumptions can be made regarding what the FDA may be looking for when it audits a facility:

• High-level hardware/software documentation, including diagrams and narratives detailing all computer programs and their relationships to each other

• Comprehensive inventories, including operating systems software, terminal emulators and client PC configurations

• Change control and error reporting SOPs

• IT personnel training records

• Software versions and installation dates

• Error reporting logs

• Evidence of proper handling of raw data files and metadata

• Archiving/backup procedures and responsibilities

• Hardware maintenance records

• Software requirements/design/testing documents

• Problem tracking

• Environmental monitoring specifications

• Disaster recovery procedures

It is clear from these requirements that a comprehensive, systematic plan is necessary to address the general expectations listed above, the Part 11 regulations and any applicable predicate rules. This plan can be divided into four phases.


Phase 1: Management Buy-In
Management buy-in is essential to any significant project in any company. The role of management in the validation program is to foster company-wide support for compliance, to make personnel available for appropriate responsibilities and to approve the expenditures necessary for implementation. It is essential to the success of a validation project not to underestimate the commitment required. Our 80-person company has invested two FTEs (1.5 man-years) in this effort since Jan. 1, 1999. The project also required the creation of a new position, CSVS (computer software validation specialist), whose full-time job is to ensure Part 11 compliance.

It is also essential to identify an internal “champion” to ensure the success of the computer validation effort. This person should be management-level within the company and have the authority to make key decisions and commit resources to the project. This person should also have the authority to set and enforce policy.

Another requirement for a successful validation is input from disparate functional areas of the company, usually accomplished by the formation of a validation committee. Within this committee, all affected departments must be represented, including IT and quality assurance. All participants must be clearly identified and held accountable to the objectives of the group.

Finally, hire a consultant! This may be costly; however, the project is large, complex and dynamic. It is my opinion that the best way to get started is with the help of an experienced consultant. Our project was nearly 100% dependent on external help at the beginning. However, we have gained experience and confidence and expect to be completely self-sufficient by mid-2001.


Phase 2: Develop SOPs To Address Operation and Maintenance of the Data Servers and Network
Even before the advent of Part 11, most companies understood the need for reliable information backup strategies, the value of a disaster recovery plan and the need for system security. It is the first task of the validation committee to identify all SOPs currently in place that relate to a company’s computer systems. Specifically, the following areas must be addressed:

• Computer System Network Backup and Recovery

• Computer System Data Archive

• Computer System Disaster Recovery

• Computer System Security

• Computer System User Administration

• Computer System Server/Network Monitoring

• Computer System Media Lifecycle

• Computer System Maintenance

As stated, some of these SOPs may already exist. If so, they should be reviewed by the committee and re-evaluated as to their relevance to current and expected practices. Any SOP that is out of date must be revised, or retired and replaced; any that is absent must be written. Computer system SOPs must not only be written, they must also be followed. Regulatory compliance is not satisfied by a well-written SOP alone; it is the job of the computer software validation specialist to ensure that day-to-day activities are consistent with the SOPs once they are implemented.


Phase 3: Validation System Development
At this point, a validation program must be formally established through a series of SOPs. It would be unusual for pre-existing SOPs to adequately address all aspects of validation system development, although some current SOPs (change control, inventories, etc.) may already be in use. A consultant can be a valuable resource for this phase, providing sample industry standard validation practices that can serve as baseline documents. It is then the task of the validation committee to modify these baseline procedures into detailed practices and procedures that are functional for their specific company. The following SOPs must be written for a successful validation system.

• Computer System Validation Policy

• Computer System Validation Program

• Computer System Change Control Policy

• Development of Computer System Requirements Documentation

• Development of Computer System Design Documentation

• Development of Computer System Test Documentation

• Computer System Configuration Management Plan

• Computer System Software and Hardware Inventory

• Computer System Environments

• Computer System Vendor Evaluation


Phase 4: Conduct a “Gap” Analysis
Once all the policy statements and SOPs are in place, it is necessary to conduct a “gap” analysis. The purpose of this exercise is to represent visually all system validation requirements. Draw a grid with the developed requirements along the ordinate and the individual computer systems along the abscissa. Physically test each system according to the requirements. Any computer system that does not meet one or more of the requirements of the validation program must be considered a “gap.” Gaps require remediation. Compliance only exists when there are no gaps.
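By way of illustration only (the requirements, systems and results below are hypothetical, and the article does not prescribe any particular tool), such a grid and the resulting gap list could be sketched in a few lines of Python:

# Hypothetical gap-analysis grid: validation requirements vs. computer systems.
# Only failures are listed in 'results'; anything not listed is assumed to
# have passed testing. A False cell is a "gap" requiring remediation.
requirements = ["Backup SOP followed", "Change control in place",
                "Requirements documentation", "Test documentation"]
systems = ["LIMS", "Chromatography data system", "Stability database"]

results = {
    ("LIMS", "Requirements documentation"): False,
    ("LIMS", "Test documentation"): False,
    ("Chromatography data system", "Change control in place"): False,
    ("Stability database", "Backup SOP followed"): False,
}

gaps = [(system, req) for system in systems for req in requirements
        if not results.get((system, req), True)]

for system, req in gaps:
    print(f"GAP: {system} does not meet requirement '{req}'")

print("Compliant" if not gaps else f"{len(gaps)} gap(s) require remediation")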

Before we look at specific required SOP elements, it is necessary to define the software life cycle and categories of software. These definitions are fundamental to the validation effort.


Software Life Cycle
The software life cycle can be represented graphically as a waterfall model. (There are other acceptable models, but this one, in my opinion, is the simplest and most effective.) The software life cycle is based on the concept that software moves through a series of steps: it is born, it is tested against specs, it is maintained and it is finally retired. The waterfall model depicts this as a series of successive events. At every step, there is documentation to verify compliance.


Categories of Software
According to information presented at the Good Automated Manufacturing Practices (GAMP) ’96 conference, there are five categories of software. Each category has its own validation requirements. Table 2 (in the print version) provides an example of each category.


Computer Validation Policy
The computer validation policy is a high-level SOP that spells out in very general terms the goals of the validation program. An analogy is the corporate mission statement. Most companies have a statement that identifies broad goals and a level of commitment required of each employee of the company. The computer validation policy is similar except that its scope is limited to computer validation.

The following is an example of a computer validation policy for an analytical testing laboratory. It is important to note that the system life cycle features prominently in this policy. This is a foundation concept, around which the rest of the validation program is built.

“To support our goal of establishing ourselves as a premier supplier of analytical laboratory services, XXX shall develop and maintain standards and procedures whereby all computer systems in the company shall be implemented and maintained in a validated state according to FDA computer systems validation guidelines. It shall be the policy of XXX that validation of computer systems used to generate, manipulate, store or transmit data related to all regulated products or services shall follow a System Life Cycle (SLC) approach. This approach shall include documentation of computer system requirements and design specifications, thorough and documented testing of the computer system for compliance, and documented procedures to ensure consistent operation and maintenance of the system in a validated state until it is retired.”

The computer system validation program is another core SOP. This document also has a high-level mission statement. An example is given below:

“At its most basic level, validation is the demonstration of control through documented evidence during development and routine operation of a computer system. This control provides a high degree of assurance that a computer system will consistently yield a product or result meeting its predetermined specification and quality attributes. The process by which computer systems are brought into a validated state and then maintained in that validated state is the facility’s validation program.”

Following this general statement, the computer system validation program SOP should detail specific actions to be taken to bring the facility into compliance. The first step, the procedure development phase, sets in stone all the elements necessary for the validation program. There are two distinct types of supporting SOPs required: (1) procedures for the operation and maintenance of the data servers and network, and (2) computer validation procedures. Development can be a time- and resource-consuming task, as it typically requires the efforts of the validation committee as well as help from an external consultant, over multiple iterations of each SOP, until the SOPs reflect the needs of each department.

Once policy SOPs have been developed, it is necessary to conduct a system inventory of all new and legacy systems. Assign a unique number to identify each computer system. Inventory each system and enter the records into a database. All information about the system will be recorded, including hardware configuration such as hard drive size, CPU clock speed and amount of random access memory, as well as a detailed inventory of the software on the system, including operating system version and build number. All new systems records should also include the installation dates of each component.
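As an illustration (the article does not prescribe a particular database or schema, and the field names below are my own assumptions), a single inventory record might be sketched like this:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class SoftwareItem:
    name: str              # e.g., operating system, terminal emulator, LIMS client
    version: str           # version and build number
    installed_on: date     # installation date (recorded for new systems)

@dataclass
class ComputerSystem:
    system_id: str         # unique number assigned to identify the system
    description: str
    hard_drive_gb: float   # hardware configuration details
    cpu_clock_mhz: int
    ram_mb: int
    software: list = field(default_factory=list)   # list of SoftwareItem records

# Example record; all values are illustrative.
cs_001 = ComputerSystem(
    system_id="CS-001",
    description="Stability chamber temperature-logging PC",
    hard_drive_gb=20.0,
    cpu_clock_mhz=800,
    ram_mb=256,
    software=[SoftwareItem("Windows NT 4.0 Workstation", "SP6a, build 1381",
                           date(2000, 3, 15))],
)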

Once the inventory is complete, a gap analysis can be conducted to identify discrepancies between requirements (the validation program) and what has actually been done for each computer system. Once the gap analysis is completed, the remediation can begin, based upon the remaining elements in the validation program.

Requirements Phase: A requirements definition must be written for each system to identify specific operating and use requirements for the computer system. Requirements are documented and approved by management. The level of detail should be sufficient to support the design phase, commensurate with the GAMP ’96 complexity model. In other words, once the requirements are defined, they will be used as the basis of the design phase. The testing phase will assure that all requirements elements have been accounted for in the design phase.

During the requirements phase, stakeholders from each department that may ultimately use the system meet to discuss features of importance to their respective groups. For example, consider a custom temperature-logging application: QA may be interested in the calibration features of the application; operations may be interested in ease of use, while management may be interested in the initial cost and cost of operation. Once again, it is imperative that all groups are represented and all requirements are defined in this phase.

Design Phase: The design phase follows the requirements phase. The purpose is to assure that each element defined in the requirements phase is addressed. Level of detail is commensurate with the GAMP ’96 complexity model, which states, for example, that there is a difference between commercial off-the-shelf (COTS) software and a fully customized system: the former requires much less validation than the latter.

Implementation Phase: During the implementation phase all elements of the system, as defined in the design phase, are brought together. Elements may include COTS or custom code, or both. The product is a fully functioning piece of software that meets the original requirements.

Testing Phase: During this phase, the computer system is tested against the specifications defined in the requirements phase. Tests are structured to provide traceability to both design and requirements documentation. Testing has three elements:

• Installation Testing – e.g., environment, power, plumbing

• Operational Testing – e.g., each individual function

• Performance Testing – e.g., maximum users, maximum data flow

Once testing is complete (successful) and the system is put into production, it enters a new phase: routine operation and maintenance. During this phase, the IT department must provide support and consultation to end-users. A possible outcome of this interaction may be maintenance. If so, all maintenance must be performed in a very controlled manner, under the conditions of the change control SOP. Any changes to the system inventory would also be updated at that time.

Finally, there is a decommissioning phase. At some point, it becomes easier to replace rather than maintain the current system. Starting over can be as simple as a new version/upgrade of currently used software or it can be as radical as replacement of the current system with that of a competing vendor. In any case, all data is archived and/or migrated to the new system. The old system is then removed from operation and a full audit trail of the dates and reasons for decommissioning is kept.

Change Control: The final SOP in the validation program defines how changes to a validated computer system are managed and documented. Changes are tested on a non-production system before they are implemented on the target system. The computer systems validation specialist must evaluate the impact of the change on the validated system. If it is a significant change, revalidation may be required. Minor changes may require only documentation.

It is my opinion that a custom database application is the best way to implement change control. This system would include the following steps, each designed to demonstrate authorization and provide a documentation trail:

• Initiation – Anyone within the company can identify, recommend or initiate a possible change

• Acknowledgement – IT acknowledges receipt of the initiation (by e-mail, for convenience)

• Pre-approval – IT reviews the request, evaluates initial feasibility

• Implementation – IT executes the change

• Resolution – IT documents resolution of the problem/change

• Testing – Demonstration of compliance and documentation of results

• Final approval – Acknowledgement of the action/resolution by the initiator, network administrator and the computer software validation specialist

• Update – Manual entry to system inventories reflecting the changes

• Audit – Performed by the computer software validation specialist
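To make this concrete, here is a minimal sketch of such a change control record; it is my own illustration rather than a description of any particular product, the workflow states mirror the steps listed above, and the field names and identifiers are assumed:

from dataclasses import dataclass, field
from datetime import datetime

# Workflow states mirror the steps listed above.
STEPS = ["Initiation", "Acknowledgement", "Pre-approval", "Implementation",
         "Resolution", "Testing", "Final approval", "Update", "Audit"]

@dataclass
class ChangeRequest:
    request_id: str
    description: str
    initiator: str
    history: list = field(default_factory=list)   # (step, who, timestamp, note) entries
    step_index: int = 0

    def advance(self, who: str, note: str = "") -> None:
        """Record completion of the current step and move to the next one."""
        step = STEPS[self.step_index]
        self.history.append((step, who, datetime.now(), note))
        if self.step_index < len(STEPS) - 1:
            self.step_index += 1

# Anyone in the company can initiate a change; IT acknowledges receipt.
cr = ChangeRequest("CC-2001-014", "Upgrade LIMS client to v2.5", initiator="J. Smith")
cr.advance("J. Smith", "Change requested")          # Initiation
cr.advance("IT", "Receipt acknowledged by e-mail")  # Acknowledgement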

This article has focused on the validation requirement of Part 11, specifically, the high-level procedural SOPs that make up an effective validation program. Future articles will address specific objectives of the requirements, design and testing SOPs, as well as specific requirements of the operation and maintenance of the data servers and network.

Validation Requirements

This article is the third in a series of articles on compliance with the electronic signatures/electronic records rule found in 21CFR11. The previous articles summarized the requirements of Part 11 and specifically looked at the high-level, policy SOPs necessary to satisfy the validation requirement as well as requirements and design documentation.

This article will also focus on the validation requirement of Part 11. Here we will look at the specific elements necessary to meet the validation requirements for each computer system for test documentation.


Test Documentation
Following a System Life Cycle model, test documentation is developed based on the requirements and system functionality as defined in the system requirements documentation. The test documentation is used to verify that each requirement and function defined in the requirements documentation has been implemented in the design of the system.

Test documentation covers both hardware and software. The testing of the system is generally divided into three sections: installation testing verifies that the physical installation of the system meets the defined requirements; operational testing verifies that the system performs the defined system functionality; and performance testing verifies that the system will operate at the extreme boundaries of the requirements and functionality, i.e., maximum volume and system stress. Installation testing and some form of operational testing are performed on all systems, while performance testing is reserved for scalable systems and custom (categories 4 and 5) systems.

First, we must discuss the guidelines for the development of test documentation, defining how to write test cases and what tests are required for each complexity category model. Then, we will define how to document the execution of the test cases.

The guidelines presented in this section for the development of testing documentation follow the complexity model presented in Table 1 (in the print version).

Regardless of the complexity of the system, the following sections are required in any testing document.

Header and Footer: The header and footer should follow the format generally used in company SOP documents. For example, headers usually contain standard elements (company name), the document type ("Test Documentation," for example) and the title of the document ("Test Documentation for LIMS version 2.5"). The footer usually contains, at a minimum, the page number ("Page 5 of 35").

Approvals: All testing documents shall have an approvals section where responsible persons will sign. I would suggest the following persons: the author of the document, the computer systems validation specialist and a member from IT.

Introduction: The introduction is a brief statement of what the system is, what the system does and where it is to be located.

Ownership: The ownership section is a statement identifying the owner of the system. The statement of ownership is written in a generic sense, identifying the department that owns the system and the department manager as the responsible person.

Overview: The overview is a detailed description of the system, indicating what the system is expected to do and how it fits into the operation of the department. If the system described in the requirements document is a replacement for an existing system, a brief description of the current system should be included. Enough detail should be included in this overview to give the reader an understanding of the system and what it does without going into discrete functions.


General Instructions to Testers
This section defines the procedures and documentation practices to be followed when executing the test cases. A further explanation of these procedures and practices is presented later in this document.

Discrepancy Reports: This section defines the procedures to be followed when the actual results of the test do not match the expected results.

Signature Log: This section is used to identify all personnel participating in the execution of the test cases. The person’s printed name, full signature and initials are recorded at the beginning of the execution of the test cases.

References: This section identifies the appropriate reference documents pertinent to the execution of the test cases. These documents shall include the SOP, the appropriate requirements documentation, and the appropriate design documentation.

Prerequisite Documentation: This section lists the validation documentation such as the requirements documentation and the design documentation that is to be in place before the execution of the test cases. The first test case shall be verification that these prerequisites have been met.

Test Equipment Log: This section is a log in which all calibrated test and measuring equipment used during the execution of the test cases is logged.

Test Cases: This section presents the test cases used to verify that the requirements and functions defined in the requirements documentation have been met. Each test case shall test one requirement or function of the system. In some cases, several closely related requirements or functions might be verified in one test case. Test cases shall be written in a landscape page setup using a table format and include the following elements:

• Objective – This is a brief statement indicating what requirement, function or module of the system the test case is intended to verify.

• System Prerequisite – This section describes any previously inputted data or other system status that must be in place to properly execute the test case. For example, when testing system security, users of various security levels may need to be in the system in order to test security levels at login.

• Input Specifications – This section defines any specific input data required to execute the test other than keystroke entries. This may include instrument test data files, barcode files, etc. A specific data file may be identified or the file identified may be generic.

• Output Specifications – This section defines the expected output of the test case other than output to the monitor. The output may be identified as reports, data files, etc.

• Special Procedural Requirements – This section details any special considerations that may be required to successfully execute the test case.

• Test Procedure – The test procedure is set up in a table format with the following column headings:


- Procedural Steps – This is a systematic series of instructions to be followed in the execution of the test. These steps should be sufficiently detailed to allow the test to be duplicated by any qualified person without changing the outcome of the test.

- Expected Result – For each procedural step, the expected outcome of that step should be defined. The defined outcome should be detailed enough to allow an unequivocal determination of the pass/fail status of the step.

- Actual Result – This column is to be left blank and completed by the person executing the test when the step is executed. The actual result of the step is recorded at the time the test is executed.

- Pass/Fail – This column is used to record the Pass/Fail status of the step by comparing the actual result to the expected result.

- Tester Initials and Date – Initialed and dated by the tester as each step is executed.

• Comments – This section is completed following execution of the test case and is used to record any discrepancies or details not captured elsewhere in the test script.

• Test Conclusion – This is an indication of the Pass/Fail status of the test case.

• Tester’s Signature – This section records the signature of the person executing the test case and the date.

• Verification Signature – This section records the signature of the person verifying the test results and the date.
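To make the structure concrete, here is a minimal sketch of a test case record with the table-format test procedure described above; the field names are my own assumptions, not a prescribed format:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestStep:
    procedure: str                  # instruction to the tester
    expected_result: str            # defined outcome allowing an unequivocal pass/fail call
    actual_result: str = ""         # recorded at the time the step is executed
    passed: Optional[bool] = None   # Pass/Fail, by comparing actual to expected
    tester_initials: str = ""
    date: str = ""

@dataclass
class TestCase:
    objective: str                               # requirement, function or module being verified
    system_prerequisite: str = ""
    input_specifications: str = ""
    output_specifications: str = ""
    special_procedural_requirements: str = ""
    steps: list = field(default_factory=list)    # list of TestStep rows
    comments: str = ""
    conclusion: str = ""                         # overall Pass/Fail status of the test case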


Requirements Traceability
Each requirement and function defined or described in the requirements documentation shall be reflected by one or more test cases. Following completion of the test documentation, the requirements documentation shall be reviewed to ensure that each requirement is reflected in the test documentation. The section number of the test case that fulfils the requirement shall be recorded in the second cell of the table in the right margin of the page. This provides a cross-reference between the requirement and the test case and ensures that the requirements are being completely verified.

As requirements of the system change, test cases that no longer have an active requirement shall be voided, not deleted. To void a test case, delete the test but not the section number, then enter "VOID" in place of the text. This prevents sections from being renumbered after a deletion and invalidating the references in the requirements document. This also eliminates the potential for confusion caused by re-assigning a section number previously used for an unrelated design element.
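A simple, illustrative way to check this cross-reference (the requirement IDs and section numbers below are hypothetical) is to confirm that every requirement appears in at least one test case:

# Map of test case section number -> requirement IDs it verifies (illustrative data).
test_cases = {"8.1": ["REQ-4.1"], "8.2": ["REQ-4.1", "REQ-4.2"]}
requirements = ["REQ-4.1", "REQ-4.2", "REQ-4.3"]

covered = {req for reqs in test_cases.values() for req in reqs}
untraced = [req for req in requirements if req not in covered]

if untraced:
    print("Requirements with no test case:", ", ".join(untraced))   # REQ-4.3 here
else:
    print("Every requirement is traced to at least one test case.")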


Considerations for Developing Test Cases
Installation Testing
Hardware Installation Testing – Hardware installation test documentation provides a high degree of assurance that the hardware has been installed according to the vendor’s specifications and configured in accordance with the requirements and design documentation for the system. This may include access space, power supply/UPS, network communications, interconnecting wiring/cabling, ambient temperature, ambient relative humidity and peripheral systems connections.

Software Installation Testing – Software installation test documentation also provides a high degree of assurance that the software has been installed according to the vendor’s specifications and configured in accordance with the requirements and design documentation for the system. Hardware installation testing must be performed before software is installed and tested. Software installation testing applies to all software components that are a part of the system, including operating system software, network and communications software, OEM software and custom software. It should also include virus checking, verification of required drive space, RAM space and drive configuration, software version numbers, security access for installation and operation, directory configuration, path modifications and any system parameter configuration.


Operational Testing
Operational test documentation provides documented evidence that the system fulfils the requirements and functions as defined in the requirements documentation. Each function described in the requirements documentation shall be tested independently.

When a function is defined with a specified range of inputs or range for a data field, that function shall be tested at each end of the range. For example, if the range of acceptable inputs is 1 to 5, the test case shall challenge the functions using inputs of 0, 1, 5 and 6, with 0 and 6 expected to fail.

Test cases shall also test to verify that illogical inputs are not accepted. For example, test cases shall challenge a date function with non-date inputs, or a numeric field with non-numeric characters.
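As a sketch of how such boundary and illogical-input cases might be enumerated (accept_value is a hypothetical stand-in for the system under test, not part of any real application):

def accept_value(value):
    """Stand-in for the system under test: accepts whole numbers from 1 to 5."""
    return isinstance(value, int) and 1 <= value <= 5

# Boundary cases for an acceptable range of 1 to 5: 0 and 6 are expected to fail.
boundary_cases = [(0, False), (1, True), (5, True), (6, False)]
# Illogical inputs: non-numeric text and a non-integer value are expected to be rejected.
illogical_cases = [("abc", False), (3.7, False)]

for value, expected in boundary_cases + illogical_cases:
    actual = accept_value(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"input={value!r:>6}  expected={expected}  actual={actual}  {status}")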

For COTS applications, many times the vendor will supply the test cases. These may be used in lieu of test cases developed in-house. There shall be a documented review of the test cases provided by the vendor relative to the functional documentation to ensure that the vendor test cases are sufficient. For applications where vendor-supplied test cases are used, requirements for the various functions of the application shall not be required, provided there is documented evidence that the vendor has followed a validation methodology.


Performance Testing
Performance testing ensures that the system will stand up to daily use. Test cases during performance testing shall verify that the system can function properly during times of high user input and high data throughput.

Performance testing is not always applicable. For systems with only a single user, stress on the system is inherently limited. Performance testing is usually executed on complex systems with multiple inputs and outputs as well as network-based systems.


Test Execution
Before the start of testing, all personnel participating in the execution shall enter their names, signatures and initials in the signature log of the test documentation.

Once execution of a test case is started, that test case must be completed before moving to the next test case. An exception to this would be if the test case fails, the failure is noted and the next test case is started. Execution of the entire set of test cases does not have to be completed in one sitting, though once testing begins, the system may not be altered. Any failed tests shall be recorded and testing shall continue unless the failure prevents continuation. If testing must be discontinued in order to correct any issues in the system, then all tests must be re-executed.

As test cases are executed, information shall be recorded neatly and legibly in black ink. Sign-off and dating of the test case shall occur on the day that the test case was executed. Mistakes made during execution of the test cases shall be corrected using a single strikethrough, so as not to obscure the original data. Any corrections made in this manner shall be initialed and dated by the person making the correction.

Errors in the test script shall be corrected using a single strikethrough so as not to obscure the original data. Any corrections made in this manner shall be initialed and dated by the person making the correction. Any corrections made to the test case during execution shall be justified in the comments section of the test case.

Completed test forms shall not contain any blank spots where information might be entered. If an item does not apply to the current test, the tester should fill in ‘N/A’ followed by an initial and date. Completed tests are reviewed and approved by the Computer Systems Validation Specialist, or designee, who signs and dates the bottom of each approved test page.

Test Failures and Discrepancy Reports
Results of tests that do not match the expected results shall be considered failures. The failure of a single step of the test case shall force the failure of the test case. However, if a step in the test case fails, execution of the test case shall continue unless the failure prevents completion of the test case.

Errors in the test case that would result in test case failure if left alone may be corrected as noted above. This correction must be justified in the comments section of the test case. Steps in the test case where the expected result is in error shall not be considered test failures if corrected. Test failures shall be noted in the test case and documented on a test discrepancy form. The form is then logged to facilitate tracking of errors during system testing.

Upon resolution of the failure, the cause of the failure is examined with respect to the failed test case and any similar test cases. All test cases associated with, or similar to, the resolved failed test case shall be reviewed to determine the extent of re-testing required. This re-testing, commonly referred to as regression testing, shall verify that the resolution of the failed test has not created adverse effects on areas of the system already tested. The analysis of the testing shall be documented to justify the extent of the regression testing.

Regression testing shall be executed using new copies of the original test script to ensure that the requirement of the system is still being met. In some instances, resolution of the failed test requires extensive redevelopment of the system. In these cases, a new test case must be developed. In either case, the failed test shall be annotated to indicate the tests executed to demonstrate resolution. The additional tests shall be attached to the discrepancy report. This provides a paper trail from the failed test case, to the discrepancy report and finally to the repeated test cases.

Test Documentation by Complexity Model

Test documentation varies with complexity category. Complexity categories are defined in Table 1 (in the print version).

Category 1 – Operating Systems: Category 1 systems are local operating systems and network systems. These systems provide the backbone needed by all other systems to operate. Due to widespread use of these applications, they do not need to be tested directly. As these systems are the base of other applications, they are indirectly tested during the testing of other applications.

Category 2 – Smart Instruments: Category 2 systems shall be tested following the requirements and functions listed in the requirements documentation. All sections of the testing document as defined in the section "General Consideration for Developing Test Cases" listed above shall be included in the test document for Category 2 systems. Installation and Operational Testing shall be executed. The complexity of the test cases should be commensurate with the complexity of the system. Performance Testing is not required.

Category 3 – COTS Applications: Functions and operations embedded by the manufacturer of the COTS application do not need to be tested. Only those functions and operations used by the applications developed in-house require testing.

Category 3 systems shall be tested following the requirements and functions listed in the requirements documentation. All sections of the testing document as defined in the section "General Consideration for Developing Test Cases" listed above shall be included in the test document for Category 3 systems. Installation and Operational Testing shall be executed. The complexity of the test cases should be commensurate with the complexity of the system. Performance Testing is not required.

Category 4 – Configurable Software Systems: Category 4 systems shall be tested following the requirements and functions listed in the requirements documentation. All sections of the testing document as defined in the section "General Consideration for Developing Test Cases" listed above shall be included in the test document for Category 4 systems. Installation and Operational Testing shall be executed. The complexity of the test cases should be commensurate with the complexity of the system. Performance Testing should be considered if the system shares data on a network.

Category 5 – Fully Custom Systems: Category 5 systems shall be tested following the requirements and functions listed in the requirements documentation. All sections of the testing document as defined in the section "General Consideration for Developing Test Cases" listed above shall be included in the test document for Category 5 systems. Installation, Operational Testing and Performance Testing shall be executed. The complexity of the test cases should be commensurate with the complexity of the system.


Following a System Life Cycle model, test documentation verifies that all requirements have been properly met in the design phase of software development. Future articles will follow the SLC model into the next phase of software validation: change control.

Electronic Records & Electronic Signatures

When 21 CFR Part 11 was released on March 20, 1997, it was given an effective date of August 20, 1997. By any measure, Part 11 was a surprise to the healthcare manufacturing industry. Many of us had waited for the Agency's approval to use electronic signatures, and the concerns of industry proponents about electronic signatures centered on the belief that the Agency would allow for their use only after the incorporation of various complicated security biometrics. We expected that the provisions for electronic signatures would potentially include requirements for retinal scans, thumb prints, voice identification, etc.

When Part 11 was released, the security control requirements for electronic signatures were fairly straightforward and benign. The requirements for electronic signature manifestations and the use of a dual user's identification and password were very clear and reasonable. But the section of Part 11 that dealt with electronic records was anything but benign. That section required predicate rule-mandated records that are created and maintained electronically to comply with the Part 11 requirements, i.e., audit trail, system security, system self-check, etc. There was no provision for grandfathering legacy systems into compliance with Part 11. This is a big deal, impacting literally thousands of legacy systems in the regulated industry. Furthermore, there was no provision for a grace period.

Part 11 was not widely reviewed or discussed prior to its effective date, and many quality and regulatory professionals stumbled into the legacy system impact of Part 11 only after they began to read and study the rule in anticipation of pursuing the application of electronic signatures. In the last two years, the industry has begun to understand more fully the implications and impact of the final rule on its computerized systems. The rule does not create any new record or signature requirements. The use of electronic records, as well as their submission to FDA, is voluntary. The agency can exercise regulatory discretion, and compliance expectations may be realized gradually.

The realities of Part 11 include the following facts: We are now more than four-and-a-half years past the effective date and Part 11 is not going to go away. Our booming e-commerce industry will only strengthen the need for controls of electronic records and signatures. The FDA provided for only a five-month implementation period, so the industry has been trying to work its way out of a state of noncompliance. We should be past grousing and complaining about Part 11 and well into understanding it and implementing remediation plans.

Definition and Scope
An electronic record is defined as any combination of text, graphics, data, audio, pictorial or other information representation in digital form that is created, modified, maintained, archived, retrieved or distributed by a computer system. The rule applies to records required by any other FDA regulation and to records submitted to FDA under the Food, Drug & Cosmetic Act or the Public Health Service Act, even if not required by FDA. The goal of the regulation is to provide a framework and set of rules for developing sound business practices to ensure the trustworthiness and reliability of electronic data, documents and signatures that are transmitted to FDA. It requires that industry demonstrate its ability to develop and maintain reliable and secure computer systems and sound business processes around these systems. Specifically, the rule applies to data captured in a computer system (electronic records) and signatures or authorizations generated by a computer (electronic signatures), as well as the security controls and business processes associated with them.

Electronic Records Provisions
Closed Systems: A closed system is defined as an environment in which system access is controlled by persons who are responsible for the content of electronic records that are on the system. Controls for closed systems:

Establish minimum controls for all systems;
Are designed to assure authenticity, integrity and confidentiality (as appropriate);
Are designed to ensure that the signer cannot readily repudiate the signature as genuine;
Validate the systems to ensure accuracy, reliability, consistent intended performance and the ability to discern invalid or altered records;
Maintain the ability to generate accurate and complete records in human readable and electronic form so that FDA may inspect, review and copy the records;
Protect records so that they are readily retrievable throughout the retention period;
Limit system access to authorized individuals;
Use secure, computer-generated, time-stamped audit trails for operator entries and actions, such that record changes do not obscure previous entries (a sketch follows this list);
Use operational system checks to enforce sequencing steps;
Use authority checks to ensure that only authorized individuals can access and use the system;
Use device checks to determine the validity of data input or operational instructions;
Ensure appropriate training of users, developers and maintenance staff;
Establish and follow written policies that deter falsification of records and signatures;
Establish adequate controls over the distribution of, access to, and use of system documentation;
Establish adequate controls over revisions and changes and maintain audit trails of modifications to system documents.
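To make the audit trail control above concrete, here is a minimal, illustrative sketch of an append-only, time-stamped audit trail in which changes never obscure previous entries; the field names are assumed, and a real implementation would also have to protect the log itself from alteration:

from datetime import datetime, timezone

audit_trail = []   # append-only: previous entries are never overwritten or obscured

def record_action(user_id: str, action: str, old_value: str, new_value: str) -> None:
    """Append a computer-generated, time-stamped record of an operator entry or action."""
    audit_trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "action": action,
        "old_value": old_value,   # retained so the change does not obscure the prior entry
        "new_value": new_value,
    })

record_action("jsmith", "edit assay result", old_value="98.2", new_value="99.1")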

Open Systems: An open system is defined as an environment in which system access is not controlled by persons who are responsible for the content of electronic records that are on the system. Controls for open systems:
Ensure authenticity, integrity and confidentiality (as appropriate) of records from point of creation to point of receipt
Employ all of the controls required for closed systems
Implement document encryption.
Implement digital signatures.

Hybrid System: A hybrid system is defined as a system for which handwritten signatures executed on paper and paper-based records (if applicable) are maintained in addition to electronic records. The controls for hybrid systems are a combination of the above two systems.

Signature/record linking:
Applies to electronic and handwritten signatures.
Must ensure that the signatures cannot be excised, copied or otherwise transferred to falsify an electronic record.

Signature Manifestations:
Signed electronic records must include:

- Printed name of the signer

- The date and time of the signature

- The meaning of the signature (e.g., review, approval, authorship)
Electronic signatures are subject to same controls as electronic records.
The information required must be included in any human-readable copy of the record.

Electronic Signatures Provisions
Part 11 defines specific requirements for the design, use and implementation of computer systems that create, modify, maintain, archive and retrieve electronic records, with or without electronic signatures. These requirements can be achieved by either technical or procedural implementation. Some requirements may include both a technical solution in the design of the system and a procedural process. Procedural processes may also be used as interim solutions while technical solutions are being developed and implemented. The electronic signature must be unique to an individual and not reassigned, and the identity of the individual must be verified by the organization. The organization must also certify to the FDA that its electronic signatures are intended to be the legally binding equivalent of traditional handwritten signatures. The FDA example is given below:

"This is to certify that {Company X} intends that all electronic signatures executed by our employees, agents or representatives, located anywhere in the world, are the legally binding equivalent of traditional handwritten signatures."

Electronic signature components and controls:
Non-biometric signatures must consist of two distinct components (e.g., an identification code and a password).
In one continuous session, the first signing must use all components; subsequent signings may use just one component.
Non-continuous session: use all components of the electronic signature.
Must be used only by their genuine owner.
Administered and executed to ensure that use by others is precluded and that any attempted use would require collaboration by two or more individuals.
Biometric signatures are a method of verifying identity based on measurement of an individual's physical feature(s) or repeatable action(s) where the features and/or actions are both unique to that individual and measurable.
Examples: voice prints, handprints, retinal scans.

Controls for identification codes/passwords:
Ensure no two individuals have the same combination.
Ensure that identification codes and passwords are periodically checked, recalled or revised.
Electronically deauthorize lost, stolen, missing, or compromised tokens, cards and devices.
Subject replacements to rigorous controls.
Conduct initial and periodic tests of tokens and cards for function.
Use transaction safeguards to:

- Prevent unauthorized use of passwords and identification codes.

- Detect and report (in an immediate and urgent manner) attempts at unauthorized use.

Audit Trail
One of the biggest concerns regarding Part 11 compliance is defining when the audit trail begins. Take a pragmatic approach, proceduralize it, adhere to it and be prepared to defend it. Audit trail initiation requirements for data should be different from audit trail initiation requirements for textual materials, such as operating procedures, reports or guidelines. If you are generating, retaining, importing or exporting any electronic data, the audit trail begins from the instant the data hits durable media. This should be recognized as an operational and regulatory imperative. It needs to be absolutely and demonstrably inviolate in this regard. But if the electronic record is textual and subject to review and approval, the audit trail begins upon the approval of the document.

Retaining the pre-approval iterations in the audit trail adds no value. If an operating procedure, for example, is typed into a word processor (stored to durable media or not) and subsequently routed either in hard copy or electronically for review and approval, it is not versioned until it is approved by all required approvers. The following procedures are imperative:
The document is not used until it has been fully approved and released into the appropriate documentation system.
The document is not released for use until, in its final altered or amended state, it has all of the required approvals.
The document is maintained via appropriate version control and retention requirements.

With these procedural controls in place, the textual document is not complete and usable until it has been formally approved and released. At this point, the 21 CFR Part 11 required audit trail is applicable. Obviously, the predicate rule drives the need for a document and subsequently the document's approval, versioning and retention requirements. If the predicate rule does not require the retention of the document's draft versions, Part 11 does not apply to draft versions. However, as I write that, I believe that, during the document's iterative draft stages, it is necessary to fully control the draft versions until the document has been approved for use. Upon approving and version controlling the final version, all electronic draft versions of the document can be deleted. An example of this is as follows:
1. An author writes a procedure/report/guideline/etc., and sends a draft copy to five different reviewers/ approvers.
2. Each reviewer/approver makes a change to the draft copy and sends his/her comments back to the original author for incorporation into a new draft version of the document.
3. The author then consolidates the comments and sends the document back to the reviewers/approvers as a new and controlled draft version.
4. The new and controlled draft version is approved by the reviewers/approvers, and the document is released as a controlled final version.
5. After the document has been released as a controlled final version, all draft versions can be deleted.
6. If the released document is subsequently revised, the above process is repeated and only the various final approved and released versions are retained. The current approved version is retained in an active status, and previous approved versions are retained in an archival status.

The draft version document described in Step 3 is controlled and saved only until the final version, described in step 4, is approved and version controlled. After the approval of the final document, any versions or copies of the draft document can be deleted.

Agency representatives have differed on the point at which the Part 11 audit trail becomes applicable. The perspectives within the agency have ranged from a very conservative umbrella statement of, "whenever anything is stored to durable media," to the more pragmatic approach previously described for audit trailing textual documents that are not available for use until approved, released, version controlled and retained per predicate rule requirements. With 21 CFR Part 11 requiring an audit trail for human-entered transactions, as opposed to those initiated by machine or computer, and not describing exactly when the audit trail begins, the industry and the FDA must develop a consistent and reasonable approach to resolving this issue.

Compliance Strategy

FDA References
Compliance Policy Guide 7153.7 May 1999

- Nature and extent of deviations

- Effect on product quality and data integrity

- Adequacy and timeliness of corrective actions

- Compliance history (especially data integrity)
Guidance - Computerized Systems in Clinical Trials - 1999

Systems Covered
Inventory all systems

- Proposed

- Current
All proposed systems should comply with Part 11
Determine threshold of risk the company is willing to accept

Plan
Develop plan for compliance of high risk systems with time frames
Demonstrate progress in implementing timetable
Determine what will be done with other systems—support or validate transcription
Document process

SOPs
System setup/installation
Data collection and handling
System maintenance
Data backup, recovery and contingency plans
Security
Change Control

Policies
Systems should clearly identify the electronic version of records as confidential
Any printout of records should be automatically marked as confidential
Establish e-mail and voice mail policies
Inform employees about the legal consequences of certification

Compliance Mission
Many companies have adopted the following Part 11 compliance approach, keeping in mind the following mission statement:

"To develop an action plan for addressing Part 11 requirements in existing systems and to support the preparation and training of business processes and procedures to assure the development, implementation and use of compliant systems in accordance with the FDA regulations."

Compliance Plan
Study and fully understand Part 11
Identify and inventory all of the Part 11-applicable electronic systems
Develop and apply a Part 11 compliance checklist in order to create a Part 11 compliance gap analysis for systems
Develop and apply a systems criticality matrix that can be used to prioritize systems for Part 11 remediation
Develop and execute against a comprehensive Part 11 remediation schedule

In order to determine a remediation path, it is necessary to project accurately the remediation cost of each system. This will include determining whether the most effective course of action is to upgrade the existing system, buy a new system that can be brought into compliance, buy a system that is scheduled to be in compliance, or buy a system that is already in compliance with Part 11.

Interdisciplinary Remediation Planning
When Part 11 remediation plans are being developed, it is essential that Quality Assurance, Regulatory Affairs/Compliance, Operations, and Information Systems personnel are all jointly involved in the planning. The software, equipment and intended use have to be considered at the very outset of planning. Is the record required by a predicate rule? What is the actual application and use of the equipment/software? What is the criticality of the system? What is the extent of the noncompliance? Can the program be brought into compliance? Is a compliant new system available? These questions are best answered from a multidisciplinary perspective.

Legacy Systems
Part 11 remediation is especially frustrating for older systems that have been validated to other standards and have been operating in an otherwise nonproblematic state. Legacy system remediation presents a unique dilemma because spending a significant amount of time and money to update an older system could appear to be of limited value. However, remaining in noncompliance while new and compliant systems are sought is fraught with regulatory peril and can't be taken lightly. It may be very costly to remediate these systems, but the fact remains that Part 11 does not provide for grandfathering legacy systems, and it does allow the industry to use electronic signatures.

Software and System Suppliers
Software and equipment suppliers have begun to understand that Part 11 represents a new set of expectations for their products, and many are trying to respond, but most are not there yet. It has become apparent that "buyer beware" is a term or concept that is very applicable to Part 11 compliance efforts. In a recent review of several well-known systems/software packages that were advertised as "Part 11 compliant," it was evident that some aspects of Part 11 were addressed, but others were not. It is imperative that manufacturers understand the requirements of the final rule and are in a position to ask the right questions of their suppliers.

Laboratory Equipment
The remediation approach of replace or upgrade will need to be looked at on a system-by-system basis or at least a system-type basis. Laboratory equipment will need to be assessed after a gap analysis has determined the level of noncompliance. If an analyzer is not designed to store data to durable media, and it holds the analysis in RAM, prints out the analysis results, and subsequently deletes the results from RAM to make way for the next analysis, it is generally interpreted that Part 11 does not apply. The electronic typewriter concept pertains, with the paper copy becoming your raw data, subject to appropriate predicate rule retention requirements.

If an analyzer stores analysis data to durable media, Part 11 applies. The raw data in this case is the electronic data, and any subsequent hard-copy printout of the data is ancillary. The printout must be demonstrated, as part of the system's validation, to be the same as the electronic raw data, and the presence of a paper copy does not remove the Part 11 requirements. Upgrading the analyzer probably represents the easiest and most direct compliance approach; if the analyzer can't be readily upgraded, the new purchase option exists, but the vast majority of new analyzers themselves are noncompliant.

FDA-regulated industry is just one player in the overall laboratory analyzer market, and demands from the industry to make new analyzers Part 11 compliant can be much like the tail trying to wag the dog. The industry believes that, while this can eventually meet with positive results, it is more likely to be met with frustration in the short run.

Discuss your options and be creative and innovative in your remediation approach. If your analyzers can't be made Part 11 compliant, get a laboratory information management system (LIMS) or an external data control system that can. Treat your analyzers as second generation for your LIMS, and assure that your LIMS software is Part 11 compliant. The FDA is not prescriptive as to where the data is retained, in which file or database. You are required to validate your system and to be able to demonstrate that your system and its data acquisition, retention and Part 11 controls are solid and repeatable.

Solutions
Information systems professionals, when introduced to Part 11 requirements, have come up with innovative solutions to the remediation quandary. Because Part 11 allows audit trails to be implemented through ancillary equipment or separate databases, the industry has the opportunity to view entire interrelated and interconnected systems, looking for the most opportune mechanism to fulfill the various Part 11 requirements. Examples of this are the use of Documentum's underlying Oracle database to record time/date transactions, or the use of an NT server's security function to provide the required level of system security for an application accessed online through the server.
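To make the idea concrete, here is a minimal, purely illustrative sketch of an append-only audit table kept in a separate database, in the spirit of letting an underlying relational database record time/date-stamped transactions. The table layout, column names and the log_change helper are hypothetical; they are not Documentum's schema or any vendor's actual interface.

# Illustrative sketch only: a minimal append-only audit trail in a separate
# database, recording who changed what and when. Names are hypothetical.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("audit_trail.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS audit_trail (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        timestamp  TEXT NOT NULL,   -- UTC time of the action
        user_id    TEXT NOT NULL,   -- authenticated system user
        record_id  TEXT NOT NULL,   -- identifier of the affected record
        action     TEXT NOT NULL,   -- e.g. CREATE / MODIFY / DELETE
        old_value  TEXT,            -- prior value, if any
        new_value  TEXT             -- new value, if any
    )
""")

def log_change(user_id, record_id, action, old_value=None, new_value=None):
    """Append one audit entry; entries are only ever inserted, never edited."""
    conn.execute(
        "INSERT INTO audit_trail (timestamp, user_id, record_id, action, old_value, new_value) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), user_id, record_id, action, old_value, new_value),
    )
    conn.commit()

# Hypothetical usage: record a change to a laboratory result.
log_change("jdoe", "SAMPLE-0042", "MODIFY", old_value="pH 6.8", new_value="pH 7.1")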

Many commercially available software programs already have system self-checks and alert database administrators to attempted unauthorized entries. Instead of being dismayed by the complexity and all-encompassing nature of 21 CFR Part 11, we need to accept that we will probably not find a single answer that does it all for every system. We must begin to look opportunistically at the systems, equipment and processes that we already have in place for resolution.

The pharmaceutical industry is actively developing plans to address full compliance with Part 11. It has already taken several steps toward adherence to the rule by preparing standards for the development, validation and use of computer systems, and it has begun to oversee the remediation of business systems and business processes and the development of new business systems used to generate, store and authorize information delivered to the FDA. The rule will also drive and support good business practices around the development and use of computerized systems. Part 11 will remain with us, and organizations that have delayed remediation are falling further behind the compliance curve. Investigators are trained on Part 11, FDA-483 citations are being issued, and Part 11 violations are being noted in warning letters. Part 11, whether you like it or not, and whether you feel it is needed or not, is a released Final Rule in the Code of Federal Regulations governing our industry, and it must be adhered to.

It is "foolish" to try to wait it out. You will fall further behind your peers and your competition, and you will put your organization at risk. The industry, working with the FDA, must develop a consistent and reasonable approach to resolving the Part 11 issue. Understand the rule, understand your requirements, and by all means understand your opportunities. Keep track of your plan, your actions and accomplishments, your innovations and solutions, and your remediation expenses.

Packaging Process Validation

Packaging process validation is often supplemented by 100% online inspection, and many firms take the position that 100% online inspection is the way to go. Even today, many companies station inspectors offline to sort out or rework unacceptably packaged product. Often the process variables are not adequately studied, and the process is never truly nailed down through process validation. The following approach, used by a large pharmaceutical company to validate its blister packaging process, may offer some insight into how Design of Experiments (DOE), performed prior to packaging validation, can help.

This case study is about an OTC product. The product launch date was set in stone; the marketing managers were even talking about pre-launching the product to select large-scale retailers, and the operations team was under tremendous pressure to finish the process validation and pre-launch activities. The product was a coated tablet, and the packaging put-up was a carton containing three blister cards of eight tablets each, for a pack of 24 tablets.

The team consisted of a Packaging Engineer, an Operations Engineer, a Production Manager, a Quality Engineer and a Project Manager. Traditionally, the company validated the packaging process by optimizing the packaging process variables and making three runs. A statistically valid sampling plan would be implemented and sample packages would be tested per the finished product specifications. In most cases, this approach worked. But this was not one of those usual projects.

Let us look into the specifics. The package design required the patient to peel the foil by holding on to a center tab. See Figure 1, which shows an example of the four-way notch at the center tab. Since the product was geared toward the elderly, the package design presented some unique challenges. A trial run was performed and samples were shown to Marketing. While the overall package quality in terms of appearance and integrity was fine, Marketing thought the package was simply too hard to open.

The team decided to establish optimum packaging process parameters using Design of Experiments (DOE) prior to conducting the packaging process validation. In "old school" scientific experimentation, people are used to conducting an OFAT experiment, meaning the process is studied by changing One Factor At a Time. This method, while successful in some cases, is almost always time-consuming and costly, and it does not guarantee that all the parameters have been optimized.
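As a toy illustration of why OFAT is limited, the sketch below (Python, purely for illustration, with placeholder factor names) enumerates OFAT runs from a baseline: each factor is varied in turn while the others are held at their baseline values.

# Illustrative only: enumerate one-factor-at-a-time (OFAT) runs from a baseline.
# Factor names and levels are placeholders, not the study's actual settings.
baseline = {"Factor A": "Low", "Factor B": "Low", "Factor C": "Low"}
alternatives = {"Factor A": "High", "Factor B": "High", "Factor C": "High"}

runs = [dict(baseline)]                     # start with the baseline run
for factor, level in alternatives.items():  # then change one factor at a time
    run = dict(baseline)
    run[factor] = level
    runs.append(run)

for run in runs:
    print(run)
# Four runs in total, but no run ever varies two factors together,
# so interactions between factors are never observed.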

The team decided to do the more methodical DOE approach, where one changes multiple parameters at a time to understand the process output.

There are many schools of thought and styles for conducting such DOE trials. One way is to conduct a "full factorial" experiment: the process is run at every possible combination of the extremes of each variable. Such an experiment is essentially an OFAT study multiplied many times over. One can collect a large amount of information about the process, but the quality of the information depends on the number of trials run at each set-up. Although it may seem counterintuitive, one can design a set of trials that is not a full factorial experiment and still collect adequate information; the obvious justification for this is resource savings. Here is a simplified example:

Let's say there are two variables (A & B) that impact product quality. And let’s say that the two extremes of each parameter are defined as + and – signs. This means the process can be run in four possible combinations as follows:

A+ B+
A+ B-
A- B+
A- B-

One can then run the process at each of these settings and collect results. None of these may be optimal, but one can get some information about how the process behaves at these extremes. (The purpose of this article is not to provide an extensive treatment of statistical analysis, but to give a flavor of how experimental trials can be constructed.)
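For readers who like to see the mechanics, the short sketch below (Python, for illustration only) enumerates those four runs; the factor names A and B are simply the placeholders used above.

# A minimal sketch of enumerating the four runs of a two-factor,
# two-level full factorial design.
from itertools import product

factors = {"A": ["+", "-"], "B": ["+", "-"]}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for run in runs:
    print(run)
# {'A': '+', 'B': '+'}
# {'A': '+', 'B': '-'}
# {'A': '-', 'B': '+'}
# {'A': '-', 'B': '-'}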

In the present case, before these trials were designed, the team brainstormed on different variables and decided to list all the significant ones. Here is a simplified list of all the potential parameters that would affect the package quality:


Materials:
• The PVC film and the foil backing used to form the blister cards.

Process Variables:
• Line Speed: determines the dwell time of the blister card on the sealing plate.
• Seal Pressure: the force with which the blister card is formed by combining the PVC with the foil backing; the force is applied by a plate driven by a rotating cylinder.
• Temperature: the temperature of the knurling plate, a critical parameter for the overall process. It can be raised or lowered, but once reached it remains constant; to run at a different temperature, the line must be stopped until the new set point is reached.

One of the significant questions was: what exactly is the team trying to solve? Marketing gave only one clue, that the package should be easy to open. That is a very broad statement. How does one determine what is easy to open? What is easy for one person may be difficult for another, and there is also the question of technique, of how each person holds the blister card and peels the backing. So the team decided to establish a difficulty-to-open scale, running from 1 (too difficult) to 5 (the easiest to open without compromising product seal integrity), recognizing that even this scale would be interpreted differently by different people. The team then took samples from the trial runs and had a random group of in-house consumers settle on the opening technique (per the instructions on the blister card). Once the technique was finalized, about 10 people were asked to peel blister cards and rate them on the difficulty scale. The results were averaged and, with some statistical and some empirical observations, a set of 'standards' was created for each point on the scale (1-5). These standards were set aside to be used for comparing the process outputs from each experimental trial.

In technical terms, the process output or quality parameter that is checked after running an experimental trial is called a response. When the results of each trial are graphed statistically, one gets a 'response curve,' a sort of continuum that shows the impact of various parameter levels on the response. Within statistical bounds, one can extrapolate or predict the response for a combination of process parameters simply by looking at the graph.

Based on the parameter list, the team set the factors and levels for the trials as follows:

• Temperature: High / Low
• Seal Pressure: High / Low
• Line Speed: High / Low

As you can see, three factors at two levels each give 2 x 2 x 2 = 8 possible combinations:

Temperature    Seal Pressure    Line Speed
High           High             High
High           High             Low
High           Low              High
High           Low              Low
Low            High             High
Low            High             Low
Low            Low              High
Low            Low              Low


Essentially this is a 2 x 2 x 2 (2³) factorial experiment, with eight possible combinations. But is running this one set of trials enough? A statistician will tell you no: one can get some information from these eight trials, but not a high level of statistical confidence in the results. The team decided to run all eight trials in random order, with each combination run three times, for a total of 24 trials. The number of replicates was decided after a statistical review and a formal cost-benefit analysis performed by the operations team and the Quality Engineer. From each trial, a set of about 100 blister cards was sampled. About five people 'opened' these 100 samples from each trial and rated the difficulty of opening on the 1-5 scale. The results were statistically processed to calculate their averages and variance, then tabulated and graphed.
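As a rough illustration of how such a trial plan could be assembled, the sketch below generates the eight combinations, replicates each three times, randomizes the run order and summarizes one trial's ratings. The rating values and helper names are invented for the example; they are not the study's actual data.

# A sketch of building the trial plan: all 2^3 = 8 combinations of
# Temperature, Seal Pressure and Line Speed, replicated three times
# and run in random order. Rating data below are placeholders.
import random
from itertools import product
from statistics import mean, pvariance

factors = {"Temperature": ["High", "Low"],
           "Seal Pressure": ["High", "Low"],
           "Line Speed": ["High", "Low"]}

combinations = [dict(zip(factors, levels)) for levels in product(*factors.values())]
trial_plan = combinations * 3          # three replicates of each combination
random.shuffle(trial_plan)             # randomized run order

# After each trial, panelists rate ~100 sampled cards on the 1-5 scale;
# summarize each trial's ratings as a mean and variance.
def summarize(ratings):
    return {"mean": mean(ratings), "variance": pvariance(ratings)}

print(len(trial_plan))                   # 24 trials in all
print(summarize([3, 4, 4, 5, 3, 4]))     # one trial's (made-up) ratings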

These three variables, we note, are continuous, meaning the process can be run with each variable set to any point between its extremes. (There are also categorical variables, which can only be run at discrete, fixed levels.) For the purpose of the experiment the process was run only at the high and low levels, but the fitted response curve shows graphically how the process should behave at intermediate points.
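The sketch below illustrates that idea under simple assumptions: the factor levels are coded as -1 (Low) and +1 (High), a plain main-effects linear model is fit by least squares, and the response values are made up for illustration rather than taken from the study.

# Illustrative only: fit a simple linear response model to coded factor
# levels (-1 = Low, +1 = High) so the ease-of-open response can be
# predicted at intermediate settings. The y values are made up.
import numpy as np

# Coded design matrix: columns are Temperature, Seal Pressure, Line Speed
X = np.array([[ 1,  1,  1],
              [ 1,  1, -1],
              [ 1, -1,  1],
              [ 1, -1, -1],
              [-1,  1,  1],
              [-1,  1, -1],
              [-1, -1,  1],
              [-1, -1, -1]], dtype=float)
y = np.array([2.1, 3.0, 3.9, 4.6, 2.2, 3.1, 4.0, 4.7])  # mean rating per run (illustrative)

# Add an intercept column and solve by least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # intercept and one coefficient per factor

# Predict the response at an intermediate setting, e.g. mid temperature,
# seal pressure slightly above mid, line speed halfway toward Low:
setting = np.array([1.0, 0.0, 0.25, -0.5])
print(setting @ coef)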

This approach—conducting a DOE prior to committing to a full-scale packaging process validation—can save a lot of problems down the road and also improve process understanding.

Some readers may think that this is far too much of a hassle to go through before process validation activities, and one might even argue that the process should have been optimized much earlier. Management is always driving to reduce costs and restrict resources, and all the line time, material and support personnel costs add up to a hefty bill. How can one convince management that this is a worthy project? The short answer is that everyone must be engaged.

In terms of the overall project, it is the Project Manager's role to challenge the team to minimize costs and assure success. This team had a competent Project Manager who did not baby-sit the project but held regular formal and informal meetings on an individual and team basis, and who would visit the production line at odd hours to smooth out any administrative or resource issues. The project was also chartered formally, with all the checks and balances. The costs of doing the project were clearly offset by the costs of not doing it: Marketing made the case in terms of product complaints, lost revenue and competitors' advantages. When the numbers were charted and compared, Senior Management did not hesitate to give the go-ahead. Contingencies for launch delays were also planned, to the dismay of Marketing. The team managed expectations well, every department head was fully engaged, and the Project Manager posted progress updates on bulletin boards.

But still the question remains: why did it come down to the eleventh hour to start this project? The problem was that, when the product was originally designed, no one took into account the impact of the various process variables on the 'ease of use' aspect. This customer requirement surfaced late in the game, when Marketing actually tried to open the package. One of the major lessons learned was to identify all customer requirements up front. In this case, the dimensional and other quality aspects of the blister card, such as appearance and seal integrity, were established, but the ergonomic requirement was not captured.

Conducting a DOE is not an easy thing. Running the trials and tabulating the results may actually be quite fun, but before one goes about conducting these experiments, a lot needs to be thought out. Thinking it through requires considerable technical and process expertise as well as statistical knowledge. A DOE project requires the experimenter to make a set of assumptions, and the wrong assumptions can doom the experiments. For example, if an important variable is ignored and not included in the trials, the results may point to a set of 'optimal' parameters that won't work in real life. One can also be blinded by strictly intuitive assumptions; the whole point of experimentation is to put them to the test under controlled conditions. To that end, a good Quality Engineer with a solid statistical understanding can help set the right course.

The reader must be wondering by now about the results of the trials. The results showed, with graphical clarity and a high level of statistical confidence, that temperature was not a major factor, but that line speed and seal pressure had a significant impact on the 'ease-of-use' response. Additional confirmation trials were run to prove that the optimized settings do in fact produce predictably good product. After that, packaging validation was a cinch. The team was applauded for its hard work, and the product launched on time.

We have simplified a good deal of information to keep this article free of statistical jargon, since a full statistical treatment is beyond its scope. But the lessons of this story are worth noting:

• Packaging process validation is not just a regulatory compliance exercise; it is a customer-centric activity.
• Data-based decision making saves time and improves the chances of a successful validation.
• Design of Experiments can be a very powerful tool for understanding your process and predicting the effect of various variables on process outputs.
• Projects must be chartered formally to assure success. Team members must be selected carefully, and the Project Manager must keep the project moving.
• Senior Management must fully trust the team and provide the agreed-upon resources.
• Contingencies for failure must be planned, and what-if scenarios must be fully understood.

It can be said that a competent, motivated team, a worthy project, and sound management can solve any packaging validation problem. END


Note: While a full factorial experiment was conducted in this case study, there are many statistically sound ways of conducting experiments. For example, there are many ways to conduct partial (fractional) factorial experiments, and one can also study the impact of several parameters by conducting screening experiments. The reader is encouraged to study the subject; one recommended book for getting a feel for it is The Experimenter's Companion by Richard B. Clements (ASQ Press).
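As one illustration of a partial factorial, the sketch below builds a common half fraction of the 2³ design by setting the third factor equal to the product of the first two. This is only one textbook construction under the stated assumption of two-level factors, not necessarily the approach taken in the referenced book.

# Illustrative only: a half fraction of the 2^3 design, built by running the
# full 2^2 design in A and B and setting C = A*B (defining relation I = ABC).
from itertools import product

half_fraction = []
for a, b in product([+1, -1], repeat=2):
    c = a * b                      # C is aliased with the A x B interaction
    half_fraction.append({"A": a, "B": b, "C": c})

for run in half_fraction:
    print(run)
# Four runs instead of eight; main effects can still be estimated, at the
# cost of confounding them with two-factor interactions.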

Pharmaceutical Validation Documentation Requirements

Pharmaceutical validation is a critical process that ensures that pharmaceutical products meet the desired quality standards and are safe fo...