Friday, June 19, 2009

Packaging Process Validation

Packaging process validation is often supplemented by 100% online inspection. Many firms take the approach that 100% online inspection is the way to go, and even today many companies station inspectors offline to sort out or rework unacceptably packaged product. Often, process variables are not adequately studied, or the process is not observed closely enough to "nail it" through process validation. The following approach, used by a large pharmaceutical company to validate its blister packaging process, may offer some insight into how Design of Experiments (DOE), applied prior to packaging validation, can help.

This case study is about an OTC product. The product launch date was set in stone; the marketing managers were even talking about pre-launching the product to select large-scale retailers. The operations team was under tremendous pressure to finish the process validation and pre-launch activities of this OTC product. The product was a coated tablet; the packaging put-up was a carton with three blister cards of eight tablets each, making a pack of 24 tablets.

The team consisted of a Packaging Engineer, an Operations Engineer, a Production Manager, a Quality Engineer and a Project Manager. Traditionally, the company validated the packaging process by optimizing the packaging process variables and making three runs. A statistically valid sampling plan would be implemented and sample packages would be tested per the finished product specifications. In most cases, this approach worked. But this was not one of those usual projects.

Let us look into the specifics. The package design required the patient to peel the foil by holding on to a center tab. See Figure 1, which shows an example of the four-way notch at the center tab. Since the product was geared towards the elderly, the package design presented some unique challenges. A trial run was performed and some samples were shown to marketing. While the overall package quality in terms of appearance and integrity was fine, Marketing thought that the package was simply too hard to open.

The team decided to establish optimum packaging process parameters using Design of Experiments (DOE) prior to conducting packaging process validation. In "old school" scientific experimentation, people are used to studying a process by simply changing One Factor At a Time (OFAT). This method, while successful in some cases, is almost always time-consuming and costly, and it does not guarantee that all the parameters have been optimized.

The team decided to take the more methodical DOE approach, in which one changes multiple parameters at a time to understand the process output.

There are many schools of thought and styles for conducting such sophisticated DOE trials. One way is to conduct a "Full Factorial" experiment. That is, the process is run for many trials at all the possible extremes of each variable. Such experiments are essentially an OFAT multiplied many times over. One can collect a large amount of information about the process, but the quality of the information depends on the number of trials one runs at each set-up. Although it may seem counterintuitive, one can design a set of trials that is not a full factorial experiment and still collect adequate information. The obvious justification for this is resource savings. Here is a simplified example:

Let's say there are two variables (A and B) that impact product quality, and let's say that the two extremes of each variable are denoted by + and – signs. This means the process can be run in four possible combinations, as follows:

A+ B+
A+ B-
A- B+
A- B-

One can then run the process at each of these settings and collect results. None of these may be optimal, but one can get some information about how the process behaves at these extremes. (The purpose of this article is not to provide an extensive treatment of statistical analysis, but to give a flavor of how experimental trials can be constructed.)
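For readers who prefer to see this concretely, here is a minimal Python sketch (using the generic factor names A and B from the example above, not actual machine parameters) that enumerates the four runs of a two-factor, two-level full factorial:

```python
from itertools import product

# Hypothetical two-level factors; '+' and '-' stand for each factor's extremes.
factors = {"A": ["-", "+"], "B": ["-", "+"]}

# A full factorial crosses every level of every factor: 2 x 2 = 4 runs.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for i, run in enumerate(runs, start=1):
    print(f"Run {i}: A{run['A']} B{run['B']}")
```

Crossing a third two-level factor the same way would double the count to eight runs, which is why full factorials grow quickly as factors are added.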

In the present case, before these trials were designed, the team brainstormed on different variables and decided to list all the significant ones. Here is a simplified list of all the potential parameters that would affect the package quality:


Materials: the PVC film and foil backing used to form the blister cards.

Process Variables:
Line Speed: determines the dwell time of the blister card on the sealing plate.
Seal Pressure: measured as the force with which the blister card is formed by combining the PVC with the foil backing. The force is applied to a plate by a rotating cylinder.
Temperature: the temperature of the knurling plate, a critical parameter for the overall process. It can be raised or lowered but, once reached, remains constant; to change it, the line must be stopped until the new temperature is reached.

One of the significant questions was: what exactly was the team trying to solve? Marketing gave only one clue, that the package should be easy to open. That is a very broad statement. How can one determine what is easy to open? What may be easy for one person may be difficult for another. There is also the question of technique: how each person holds the blister card before opening and how each peels the backing. So the team decided to establish a difficulty-to-open scale, with 1 being too difficult and 5 being the best possible, that is, the easiest to open without compromising product seal integrity. Even this scale would differ among different people, so the team took samples from the trial runs and had a random group of in-house consumers settle on the opening technique (per the instructions on the blister card). Once the technique was finalized, about 10 people were asked to peel blister cards and rate them on the difficulty scale. The results were averaged and, with some statistical and some empirical observations, a set of 'standards' was created for each notch on the scale (1-5). These 'standards' were set aside to be used for comparing the process outputs from each experimental trial.
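As a rough illustration of the arithmetic behind those 'standards' (the ratings below are invented placeholders, not the team's data), averaging a ten-person panel's scores for one sample might look like this:

```python
from statistics import mean, pstdev

# Hypothetical 1-5 difficulty ratings from a ten-person panel for one blister card:
# 1 = too difficult, 5 = easiest to open without compromising seal integrity.
panel_ratings = [3, 4, 4, 5, 3, 4, 4, 3, 5, 4]

average = mean(panel_ratings)      # central tendency of the panel
spread = pstdev(panel_ratings)     # how much the panelists disagree

# Round to the nearest whole notch to compare against the 1-5 'standards'.
print(f"average = {average:.1f}, spread = {spread:.2f}, nearest standard = {round(average)}")
```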

In technical terms, the process output, or the quality parameter that is checked after running an experimental trial, is called a response. When the results of each trial are graphed statistically, one gets a 'response curve,' a sort of continuum that shows the impact of various parameter levels on the response. Within statistical bounds, one can extrapolate or predict the response for a combination of process parameters simply by looking at the graph.

Based on the parameter list, the team set about defining the trial combinations. The factors and their levels were as follows:

Temperature: High or Low
Seal Pressure: High or Low
Line Speed: High or Low

From these two levels of each factor, the team selected the following six trial combinations:

Temperature    Seal Pressure    Line Speed
High           High             High
Low            Low              Low
High           Low              Low
High           Low              High
Low            High             High
Low            High             Low


Strictly speaking, a full factorial for three factors at two levels comprises 2 x 2 x 2 = 8 combinations, so the six trials above do not quite cover the whole design space. But is running each of these trials once enough? A statistician will tell you no. One can get some information from a single pass through the six trials, but one cannot have a high level of statistical confidence in the results. The team decided to run all six trials in random order and to repeat the set three times, for a total of 18 trials. The number of trials was decided after a statistical review and a formal cost-benefit analysis performed by the operations team and the Quality Engineer. From each trial, about 100 blister cards were sampled. About five people 'opened' the 100 samples from each trial and rated the difficulty to open on a scale of 1-5. These results were statistically processed to calculate averages and variances, and the results were tabulated and graphed.
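A hedged sketch of how such a tabulation might look in code follows; the six combinations are those listed above, but the replicate responses are simulated placeholders rather than the study's measurements:

```python
import random
from statistics import mean, variance

# The six factor combinations that were run (Temperature, Seal Pressure, Line Speed).
combinations = [
    ("High", "High", "High"),
    ("Low",  "Low",  "Low"),
    ("High", "Low",  "Low"),
    ("High", "Low",  "High"),
    ("Low",  "High", "High"),
    ("Low",  "High", "Low"),
]

random.seed(0)  # reproducible placeholder data, not the study's measurements

for combo in combinations:
    # Three replicate runs per combination; each replicate's response is the
    # average 1-5 ease-of-use rating over the ~100 sampled blister cards
    # (simulated here with random placeholders).
    replicates = [round(random.uniform(1.0, 5.0), 2) for _ in range(3)]
    t, p, s = combo
    print(f"T={t:<4} P={p:<4} S={s:<4} "
          f"mean={mean(replicates):.2f} variance={variance(replicates):.2f}")
```

Each printed mean is the response for that combination, and the variance gives a feel for run-to-run repeatability.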

These three variables, we note, are continuous, meaning that the process can be run with each variable set anywhere between its extremes. There are also categorical variables, which can only be run at discrete levels. For the purpose of the experiment, the process was run at only the high and low levels, but the response curve can show in graphical detail how the process will behave at intermediate points.
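To make the idea of reading the response at intermediate settings concrete, here is a minimal sketch that fits a simple main-effects model by least squares to coded (-1/+1) factor levels; the response values are placeholders chosen for illustration, not the measured results:

```python
import numpy as np

# Coded factor levels (-1 = Low, +1 = High) for the six combinations that were run,
# in the order: Temperature, Seal Pressure, Line Speed.
X = np.array([
    [+1, +1, +1],
    [-1, -1, -1],
    [+1, -1, -1],
    [+1, -1, +1],
    [-1, +1, +1],
    [-1, +1, -1],
], dtype=float)

# Placeholder mean ease-of-use scores (1-5) for each combination, illustrative only.
y = np.array([2.8, 3.1, 3.0, 2.2, 4.4, 3.6])

# Fit a main-effects model: y ~ b0 + bT*T + bP*P + bS*S
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, bT, bP, bS = coef
print(f"intercept={b0:.2f}  temperature={bT:.2f}  seal pressure={bP:.2f}  line speed={bS:.2f}")

# Because the factors are continuous, the fitted surface can be read at intermediate
# settings, e.g. mid temperature, slightly low seal pressure, moderately high line speed.
new_point = np.array([1.0, 0.0, -0.4, 0.6])
print(f"predicted ease-of-use at that intermediate setting: {new_point @ coef:.2f}")
```

Because the coded factors are continuous, the fitted model can be evaluated anywhere between -1 and +1, which is the numerical counterpart of reading an intermediate point off the response curve.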

This approach, conducting a DOE before committing to a full-scale packaging process validation, can head off a lot of problems down the road and also improve process understanding.

Some readers may think this is far too much of a hassle to go through before process validation activities, and one might even argue that the process should have been optimized much earlier. Management is always driving to reduce costs and restrict resources, and all the line time, material, and support personnel costs add up to a hefty bill. How can one convince management that this is a worthy project? The short answer is that everyone must be engaged. In terms of the overall project, it is truly the Project Manager's role to challenge the team about minimizing costs and assuring success. This team had a competent Project Manager who did not baby-sit the project but held regular formal and informal meetings on an individual and team basis. He would visit the production line at odd hours to see how he could smooth out any administrative or resource issues. Additionally, the project was chartered formally, with all the checks and balances. The costs of doing the project were clearly offset by the costs of not doing it: Marketing made the case in terms of the cost of product complaints, lost revenue, and competitors' advantage. When the numbers were charted and compared, Senior Management did not hesitate and gave the go-ahead. Contingencies for launch delays were also planned, to the dismay of Marketing. The team members managed expectations quite well, every department head was fully engaged, and the Project Manager posted progress updates on bulletin boards.

But still the question remains: why did it come down to the eleventh hour to start this project? The problem was that, when the product was originally designed, no one took into account the impact of the various process variables on the 'ease of use' aspect. This customer requirement surfaced late in the game, when Marketing actually tried to open the package. One of the major lessons learned was to have all customer requirements defined up front. In this case, the dimensional and other quality aspects of the blister card, such as appearance and seal integrity, were established, but the ergonomic requirement was not captured.

Conducting a DOE is not an easy thing. Running the trials and tabulating the results may actually be quite fun, but before one goes about conducting these experiments, a lot needs to be thought out. Thinking this through requires a great deal of technical/process expertise and statistical knowledge. A DOE project requires the experimenter to make a set of assumptions, and failure to make the right assumptions can doom the experiments. For example, if an important variable is ignored and not included in the trials, the results may point to a set of 'optimal' parameters that will not work in real life. One can also be blinded by strictly intuitive assumptions; the whole idea of experimentation is to provide a controlled, laboratory-like setting for testing them. To that end, a good Quality Engineer with a solid statistical understanding can help set the right course.

The reader must be wondering by now about the results of the trials. The results showed, with graphical clarity and a high level of statistical confidence, that temperature was not a major factor, but that line speed and seal pressure had a significant impact on the 'ease-of-use' response. Additional confirmation trials were run to prove that the optimized settings do in fact produce predictably good product. After that, packaging validation was a cinch. The team was applauded for its hard work, and the product launched on time.

We have simplified a great deal of information to keep this article free of statistical jargon, which is beyond its scope. But the lessons of this story are worth noting:

Packaging process validation is not just a regulatory compliance exercise; rather, it is a customer-centric activity.
Data-based decision making saves time and improves the chances of a successful validation.
Design of Experiments can be a very powerful tool to understand your process and to predict the effect of various variables on the process outputs.
Projects must be chartered formally to assure success. Team members must be selected carefully and the Project Manager must keep the project moving.
Senior Management must fully trust the team and provide the agreed upon resources.
Contingencies for failure must be planned and what-if scenarios must be fully understood.

It can be said that a competent, motivated team, a worthy project, and sound management can solve any packaging validation problem. END


Note:
While a factorial experiment was conducted in this case study, there are many other statistically sound ways of conducting experiments. For example, there are many ways to conduct partial (fractional) factorial experiments, and one can also study the impact of several parameters by conducting screening experiments. The reader is encouraged to study the subject; one recommended book to get a feel for it is The Experimenter's Companion by Richard B. Clements, ASQ Press.
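As a small taste of what a fractional design looks like (a minimal sketch in Python, not the approach used in this case study), the following generates a half-fraction of a three-factor, two-level design by deriving the third factor from the defining relation C = A*B on the coded -1/+1 scale:

```python
from itertools import product

def label(value):
    """Translate a coded level (-1/+1) into the High/Low wording used above."""
    return "High" if value > 0 else "Low"

# A 2^(3-1) half-fraction: enumerate two factors fully, then derive the third
# factor from the defining relation C = A*B on the coded -1/+1 scale.
half_fraction = [(a, b, a * b) for a, b in product((-1, +1), repeat=2)]

for a, b, c in half_fraction:
    print(f"A={label(a):<4}  B={label(b):<4}  C={label(c):<4}")
```

The half-fraction needs only four runs instead of eight, at the cost of confounding some effects with one another, which is exactly the kind of trade-off a screening experiment accepts in exchange for resource savings.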

1 comment:

Joch said...

The Complete DOE should have 8 trials and not six.
The last of the six trials presented in the table should be corrected to: Low High Low.
Jose Chvaicer
