Tuesday, December 1, 2009

Validation Requirements

This article is the third in a series on compliance with the electronic signatures/electronic records rule found in 21 CFR Part 11. The previous articles summarized the requirements of Part 11 and looked specifically at the high-level policy SOPs necessary to satisfy the validation requirement, as well as at requirements and design documentation.

This article also focuses on the validation requirement of Part 11. Here we will look at the specific elements of test documentation necessary to meet the validation requirements for each computer system.


Test Documentation
Following a System Life Cycle model, test documentation is developed based on the requirements and system functionality as defined in the system requirements documentation. The test documentation is used to verify that each requirement and function defined in the requirements documentation has been implemented in the design of the system.

Test documentation covers both hardware and software. The testing of the system is generally divided into three sections: Installation testing verifies that the physical installation of the system meets the defined requirements; Operational testing verifies that the system performs the defined system functionality; Performance testing verifies that the system will operate at the extreme boundaries of the requirements and functionality, i.e., maximum volume and system stress. Installation testing and some form of Operational testing are performed on all systems, while Performance testing is used for scalable systems and custom (Categories 4 and 5) systems.

First, we must discuss the guidelines for the development of test documentation, defining how to write test cases and what tests are required for each complexity category. Then, we will define how to document the execution of the test cases.

The guidelines presented in this section for the development of testing documentation shall follow the complexity model presented in Table 1.

Regardless of the complexity of the system, the following sections are required in any testing document.

Header and Footer: The header and footer should follow the format generally used in company SOP documents. For example, headers usually contain standard elements (company name), the document type ("Test Document," for example) and the title of the document ("Test Documentation for LIMS version 2.5"). The footer usually contains, at a minimum, the page number ("Page 5 of 35").

Approvals: All test documents shall have an approvals section where responsible persons sign. I would suggest the following persons: the author of the document, the computer systems validation specialist and a representative from IT.

Introduction: The introduction is a brief statement of what the system is, what the system does and where it is to be located.

Ownership: The ownership section is a statement identifying the owner of the system. The statement of ownership is written in a generic sense, identifying the department that owns the system and the department manager as the responsible person.

Overview: The overview is a detailed description of the system, indicating what the system is expected to do and how it fits into the operation of the department. If the system described in the requirements document is a replacement for an existing system, a brief description of the current system should be included. Enough detail should be included in this overview to give the reader an understanding of the system and what it does without going into discrete functions.


General Instructions to Testers
This section defines the procedures and documentation practices to be followed when executing the test cases. A further explanation of these procedures and practices is presented later in this document.

Discrepancy Reports: This section defines the procedures to be followed when the actual results of the test do not match the expected results.

Signature Log: This section is used to identify all personnel participating in the execution of the test cases. The person’s printed name, full signature and initials are recorded at the beginning of the execution of the test cases.

References: This section identifies the appropriate reference documents pertinent to the execution of the test cases. These documents shall include the SOP, the appropriate requirements documentation, and the appropriate design documentation.

Prerequisite Documentation: This section lists the validation documentation such as the requirements documentation and the design documentation that is to be in place before the execution of the test cases. The first test case shall be verification that these prerequisites have been met.

Test Equipment Log: This section is a log in which all calibrated test and measuring equipment used during the execution of the test cases is recorded.

Test Cases: This section presents the test cases used to verify that the requirements and functions defined in the requirements documentation have been met. Each test case shall test one requirement or function of the system. In some cases, several closely related requirements or functions might be verified in one test case. Test cases shall be written in a landscape page setup using a table format and include the following elements (an illustrative sketch follows this list):

• Objective – This is a brief statement indicating what requirement, function or module of the system the test case is intended to verify.

• System Prerequisite – This section describes any previously entered data or other system status that must be in place to properly execute the test case. For example, when testing system security, users of various security levels may need to be in the system in order to test security levels at login.

• Input Specifications – This section defines any specific input data required to execute the test other than keystroke entries. This may include instrument test data files, barcode files, etc. A specific data file may be identified or the file identified may be generic.

• Output Specifications – This section defines the expected output of the test case other than output to the monitor. The output may be identified as reports, data files, etc.

• Special Procedural Requirements – This section details any special considerations that may be required to successfully execute the test case.

• Test Procedure – The test procedure is set up in a table format with the following column headings:


Procedural Steps – This is a systematic series of instructions to be followed in the execution of the test. These steps should be sufficiently detailed as to allow the test to be duplicated by any qualified personnel without changing the outcome of the test.

Expected Result – For each procedural step, the expected outcome of that step should be defined. The defined outcome should be detailed enough to allow an unequivocal determination of the pass/fail status of the step.

Actual Result – This column is to be left blank and completed by the person executing the test when the step is executed. The actual result of the step is recorded at the time the test is executed.

Pass/Fail – This column is used to record the Pass/Fail status of the test by comparing the actual result to the expected result.

Tester Initials and Date – Each step is initialed and dated by the tester at the time the step is executed.

• Comments – This section is completed following execution of the test case and is used to record any discrepancies or details not captured elsewhere in the test script.

• Test Conclusion – This is an indication of the Pass/Fail status of the test case.

• Tester’s Signature – This section records the signature of the person executing the test case and the date.

• Verification Signature – This section records the signature of the person verifying the test results and the date.
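For illustration only, the sketch below shows how the elements listed above might be captured as a simple data structure. The field names are assumptions, not prescribed by Part 11 or by any particular SOP; in practice these elements live in the controlled, signed test document.

```python
# Minimal sketch of a test case record mirroring the elements listed above.
# Field names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProcedureStep:
    step: str                               # Procedural Steps - instruction to the tester
    expected_result: str                    # Expected Result - defined before execution
    actual_result: Optional[str] = None     # Actual Result - recorded at execution time
    passed: Optional[bool] = None           # Pass/Fail - actual vs. expected
    tester_initials: Optional[str] = None   # Tester Initials and Date
    date: Optional[str] = None

@dataclass
class TestCase:
    objective: str
    system_prerequisites: str
    input_specifications: str
    output_specifications: str
    special_procedural_requirements: str
    procedure: List[ProcedureStep] = field(default_factory=list)
    comments: str = ""
    conclusion: Optional[str] = None        # overall Pass/Fail status
    tester_signature: Optional[str] = None
    verification_signature: Optional[str] = None
```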


Requirements Traceability
Each requirement and function defined or described in the requirements documentation shall be reflected by one or more test cases. Following completion of the test documentation, the requirements documentation shall be reviewed to ensure that each requirement is reflected in the test documentation. The section number of the test case that fulfils the requirement shall be recorded in the second cell of the table in the right margin of the page. This provides a cross-reference between the requirement and the test case and ensures that the requirements are being completely verified.
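As a rough illustration of the cross-check described above, the sketch below flags any requirement that has no corresponding test case. The requirement IDs and test case section numbers are hypothetical.

```python
# Minimal sketch of a traceability cross-check; identifiers are illustrative only.

requirements = ["REQ-001", "REQ-002", "REQ-003"]   # from the requirements document
test_case_coverage = {                             # test case section -> requirements it verifies
    "8.1": ["REQ-001"],
    "8.2": ["REQ-002", "REQ-001"],
}

covered = {req for reqs in test_case_coverage.values() for req in reqs}
untraced = [req for req in requirements if req not in covered]

if untraced:
    print("Requirements with no test case:", ", ".join(untraced))   # e.g. REQ-003
else:
    print("All requirements trace to at least one test case.")
```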

As requirements of the system change, test cases that no longer have an active requirement shall be voided, not deleted. To void a test case, delete the text of the test but not the section number, then enter "VOID" in place of the text. This prevents sections from being renumbered after a deletion, which would invalidate the references in the requirements document. It also eliminates the potential for confusion caused by re-assigning a section number previously used for an unrelated test case.


Considerations for Developing Test Cases
Installation Testing
Hardware Installation Testing – Hardware installation test documentation provides a high degree of assurance that the hardware has been installed according to the vendor’s specifications and configured in accordance with the requirements and design documentation for the system. This may include access space, power supply/UPS, network communications, interconnecting wiring/cabling, ambient temperature, ambient relative humidity and peripheral systems connections.

Software Installation Testing – Software installation test documentation likewise provides a high degree of assurance that the software has been installed according to the vendor’s specifications and configured in accordance with the requirements and design documentation for the system. Hardware installation testing must be performed before installing and testing the software. Software installation testing applies to all software components that are part of the system, including operating system software, network and communications software, OEM software and custom software. It should also include virus checking, verification of required drive space, RAM space and drive configuration, software version numbers, security access for installation and operation, directory configuration, path modifications and any system parameter configuration.
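Some of these installation checks can be supported by a simple script run during execution of the test cases, with the printed output attached as objective evidence. A minimal sketch follows; the free-space threshold and installation path are assumptions and would come from the requirements and design documentation.

```python
# Minimal sketch of scripted installation checks to support (not replace) the
# executed test cases; threshold and path are assumptions.
import platform
import shutil

MIN_FREE_GB = 10          # assumed requirement from the design documentation
INSTALL_PATH = "/"        # assumed installation path (e.g., "C:\\" on Windows)

free_gb = shutil.disk_usage(INSTALL_PATH).free / 1024**3
print(f"Free space on {INSTALL_PATH}: {free_gb:.1f} GB "
      f"({'PASS' if free_gb >= MIN_FREE_GB else 'FAIL'})")

print(f"Operating system: {platform.system()} {platform.release()}")
print(f"Python runtime:   {platform.python_version()}")
```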


Operational Testing
Operational test documentation provides documented evidence that the system fulfils the requirements and functions as defined in the requirements documentation. Each function described in the requirements documentation shall be tested independently.

When a function is defined with a specified range of inputs or range for a data field, that function shall be tested at each end of the range. For example, if the range of acceptable inputs is 1 to 5, the test case shall challenge the functions using inputs of 0, 1, 5 and 6, with 0 and 6 expected to fail.

Test cases shall also verify that illogical inputs are not accepted. For example, test cases shall challenge a date function with non-date inputs, or a numeric field with non-numeric characters.
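A hedged sketch of both checks, written as automated tests, is shown below. The function validate_replicates() is a hypothetical stand-in for a system input routine that accepts integers from 1 to 5; in a regulated test script these checks would be documented as procedural steps with expected and actual results.

```python
# Minimal sketch of boundary and invalid-input checks for a field that accepts
# integers 1 to 5; validate_replicates() is a hypothetical stand-in function.
import pytest

def validate_replicates(value):
    """Stand-in for the system function under test (accepts 1-5 inclusive)."""
    if not isinstance(value, int) or not 1 <= value <= 5:
        raise ValueError(f"Invalid input: {value!r}")
    return value

@pytest.mark.parametrize("value", [1, 5])            # boundary values expected to pass
def test_boundary_accepted(value):
    assert validate_replicates(value) == value

@pytest.mark.parametrize("value", [0, 6, "abc"])     # out-of-range and illogical inputs
def test_invalid_rejected(value):
    with pytest.raises(ValueError):
        validate_replicates(value)
```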

For COTS applications, the vendor will often supply test cases. These may be used in lieu of test cases developed in-house. There shall be a documented review of the vendor-supplied test cases against the functional documentation to ensure that they are sufficient. Where vendor-supplied test cases are used, detailed requirements for the various functions of the application shall not be required, provided there is documented evidence that the vendor has followed a validation methodology.


Performance Testing
Performance testing ensures that the system will stand up to daily use. Test cases during performance testing shall verify that the system can function properly during times of high user input and high data throughput.

Performance testing is not always applicable. For systems with only a single user, stress on the system is inherently limited. Performance testing is usually executed on complex systems with multiple inputs and outputs, as well as on network-based systems.
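A minimal sketch of such a check is shown below, assuming a hypothetical submit_sample() transaction, 50 concurrent users and a five-second acceptance limit; the actual figures would come from the performance requirements for the system.

```python
# Minimal sketch of a load check simulating concurrent users; submit_sample(),
# the user count and the time limit are assumptions, not system specifics.
import time
from concurrent.futures import ThreadPoolExecutor

def submit_sample(sample_id):
    """Stand-in for one user transaction against the system."""
    time.sleep(0.01)                 # simulated processing time
    return sample_id

CONCURRENT_USERS = 50
TRANSACTIONS = 500
TIME_LIMIT_S = 5.0                   # assumed acceptance criterion

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(submit_sample, range(TRANSACTIONS)))
elapsed = time.perf_counter() - start

print(f"{len(results)} transactions in {elapsed:.2f} s "
      f"({'PASS' if elapsed <= TIME_LIMIT_S else 'FAIL'})")
```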


Test Execution
Before the start of testing, all personnel participating in the execution shall enter their names, signatures and initials in the signature log of the test documentation.

Once execution of a test case is started, that test case must be completed before moving to the next test case. An exception would be when a test case fails; in that case the failure is noted and the next test case is started. Execution of the entire set of test cases does not have to be completed in one sitting, though once testing begins, the system may not be altered. Any failed tests shall be recorded and testing shall continue unless the failure prevents continuation. If testing must be discontinued in order to correct any issues in the system, then all tests must be re-executed.

As test cases are executed, information shall be recorded neatly and legibly in black ink. Sign-off and dating of the test case shall occur on the day that the test case was executed. Mistakes made during execution of the test cases shall be corrected using a single strikethrough, so as not to obscure the original data. Any corrections made in this manner shall be initialed and dated by the person making the correction.

Errors in the test script shall be corrected using a single strikethrough so as not to obscure the original data. Any corrections made in this manner shall be initialed and dated by the person making the correction. Any corrections made to the test case during execution shall be justified in the comments section of the test case.

Completed test forms shall not contain any blank spots where information might be entered. If an item does not apply to the current test, the tester should enter "N/A" followed by initials and the date. Completed tests are reviewed and approved by the Computer Systems Validation Specialist, or designee, who signs and dates the bottom of each approved test page.

Test Failures and Discrepancy Reports
Results of tests that do not match the expected results shall be considered failures. The failure of a single step of the test case shall force the failure of the test case. However, if a step in the test case fails, execution of the test case shall continue unless the failure prevents completion of the test case.

Errors in the test case that would result in test case failure if left alone may be corrected as noted above. This correction must be justified in the comments section of the test case. Steps in the test case where the expected result is in error shall not be considered test failures if corrected. Test failures shall be noted in the test case and documented on a test discrepancy form. The form is then logged to facilitate tracking of errors during system testing.

Upon resolution of the failure, the cause of the failure is examined with respect to the failed test case and any similar test cases. All test cases associated with, or similar to, the resolved failed test case shall be reviewed to determine the extent of re-testing required. This re-testing, commonly referred to as regression testing, shall verify that the resolution of the failed test has not created adverse effects on areas of the system already tested. The analysis of the testing shall be documented to justify the extent of the regression testing.

Regression testing shall be executed using new copies of the original test script to ensure that the requirements of the system are still being met. In some instances, resolution of the failed test requires extensive redevelopment of the system. In these cases, a new test case must be developed. In either case, the failed test shall be annotated to indicate the tests executed to demonstrate resolution. The additional tests shall be attached to the discrepancy report. This provides a paper trail from the failed test case, to the discrepancy report and finally to the repeated test cases.
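The paper trail described above can be pictured as a simple linked record; the sketch below is illustrative only, and all identifiers and field names are hypothetical.

```python
# Minimal sketch of the linkage: failed test case -> discrepancy report ->
# re-executed test cases. Identifiers are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DiscrepancyReport:
    report_id: str
    failed_test_case: str            # section number of the failed test case
    description: str                 # actual vs. expected result
    resolution: str                  # how the failure was resolved
    retested_cases: List[str] = field(default_factory=list)  # regression tests executed

dr = DiscrepancyReport(
    report_id="DR-014",
    failed_test_case="8.3",
    description="Report total did not match the expected value.",
    resolution="Calculation configuration corrected.",
    retested_cases=["8.3 (re-execution)", "8.2", "8.4"],
)
print(f"{dr.report_id}: test case {dr.failed_test_case} -> "
      f"retested {', '.join(dr.retested_cases)}")
```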

Test Documentation by Complexity Model

Test documentation varies with complexity category. Complexity categories are defined in Table 1 above.

Category 1 – Operating Systems: Category 1 systems are local operating systems and network systems. These systems provide the backbone needed by all other systems to operate. Due to the widespread use of these applications, they do not need to be tested directly. As these systems form the base for other applications, they are indirectly tested during the testing of those applications.

Category 2 – Smart Instruments: Category 2 systems shall be tested following the requirements and functions listed in the requirements documentation. All sections of the testing document as defined in the section "General Considerations for Developing Test Cases" above shall be included in the test document for Category 2 systems. Installation and Operational Testing shall be executed. The complexity of the test cases should be commensurate with the complexity of the system. Performance Testing is not required.

Category 3 – COTS Applications: Not every function and operation provided by the manufacturer of the COTS application needs to be tested. Only those functions and operations used by the applications developed in-house require testing.

Category 3 systems shall be tested following the requirements and functions listed in the requirements documentation. All sections of the testing document as defined in the section "General Considerations for Developing Test Cases" above shall be included in the test document for Category 3 systems. Installation and Operational Testing shall be executed. The complexity of the test cases should be commensurate with the complexity of the system. Performance Testing is not required.

Category 4 – Configurable Software Systems: Category 4 systems shall be tested following the requirements and functions listed in the requirements documentation. All sections of the testing document as defined in the section "General Considerations for Developing Test Cases" above shall be included in the test document for Category 4 systems. Installation and Operational Testing shall be executed. The complexity of the test cases should be commensurate with the complexity of the system. Performance Testing should be considered if the system shares data on a network.

Category 5 – Fully Custom Systems: Category 5 systems shall be tested following the requirements and functions listed in the requirements documentation. All sections of the testing document as defined in the section "General Considerations for Developing Test Cases" above shall be included in the test document for Category 5 systems. Installation, Operational and Performance Testing shall be executed. The complexity of the test cases should be commensurate with the complexity of the system.


Following a System Life Cycle model, test documentation verifies that all requirements have been properly met in the design phase of software development. Future articles will follow the SLC model into the next phase of software validation: change control.
