What Lab Managers Should Know About Managing and Reviewing Orphan Chromatographic Data in Light of the FDA’s Quality Metrics Initiative

Pharmaceutical industry laboratory managers have likely been following the FDA’s Quality Metrics Initiative, which aims to prevent and mitigate drug shortages and to encourage manufacturers to adopt state-of-the-art, innovative quality management systems. The FDA’s Revision 1 guidance, issued in November 2016,1 will require drug product and active pharmaceutical ingredient (API) manufacturers to report on a new set of quality metrics. Despite industry concerns about the burden of data aggregation and uncertainty about the potential benefits, the FDA decided to move forward with the initiative.

The FDA proposed the quality metrics, or quality indicators, after working with industry groups, including the International Society for Pharmaceutical Engineering (ISPE). This set of metrics will allow the FDA to measure and assess the year-over-year quality of medicines in the U.S. supply chain. The data will also allow the FDA to determine the overall effectiveness of individual pharmaceutical companies’ quality management systems and generate a database of information that will inform the FDA’s inspection schedule. The hope is that the FDA will be able to exercise more flexibility over its site inspection frequency, relying on the data to dictate its inspection schedule. This could mean fewer or more frequent inspections depending on the quality performance of a particular manufacturing site. This is in line with the FDA’s risk-based monitoring goals and will allow the organization to be more efficient with its staff and meet its mandate to protect public health.

Why three quality metrics?

During the second pilot phase of the initiative, different industry groups, including the ISPE, evaluated approximately 20 potential quality indicators currently used by pharmaceutical manufacturers. The FDA chose the three indicators (Table 1) that most closely correlated with pharmaceutical companies’ overall quality culture, as assessed through detailed interviews.

Table 1 – Indicators that correlate with quality culture of pharmaceutical companies

Invalidated out-of-specification rate quality metric and orphan data

Pharmaceutical ingredients and finished drug products undergo a battery of tests at all stages of drug development and manufacturing. Liquid chromatography (LC) and chromatography data management software play an important role in the creation, collection, and reporting of test results used in quality decisions. A suitable chromatography data system (CDS) is essential for adherence to 21 CFR Part 11, Annex 11, or other electronic recordkeeping guidelines and for ensuring data integrity. Such a CDS records every action taken and every data point captured about a sample analyzed by LC. This includes time- and date-stamps for the creation, modification, or deletion of data; analytical and postrun processing methods; and the identity of the analyst, data reviewer, or any other user interacting with the record.

Regardless of whether a sample passes a lot release test, a CDS captures all details of the analysis, including all the versions of the methods and all the interim results, which may include both in-specification and out-of-specification (OOS) test results. As defined by the FDA, an OOS result is “any test result that falls outside the specifications or acceptance criteria established in drug applications, drug master files (DMFs), official compendia, or by the manufacturer. The term also applies to all in-process laboratory tests that are outside of established specifications.” 2

Errors can occur during the manufacturing or packaging process and contribute to an OOS outcome. Such errors include inconsistencies in runs and/or issues with mixing and compounding. However, measurement process failures, including issues with reference standards, columns, and solvent preparation, as well as power outages, instrument malfunction, and human error, are often the source of an OOS result. When any OOS result is obtained, FDA regulations first require a thorough, scientifically sound, and documented investigation to identify and, if possible, address the source of the error. Only after an OOS result is properly identified as a testing error with a defined root cause can the result be scientifically invalidated and excluded from further decision-making. At this point only, the sample can be retested according to the laboratory’s own standard operating procedures (SOPs). This retesting procedure might require testing multiple samples or multiple preparations, on multiple instruments, and/or by different analysts.

It is important to note that when a measurement error occurs, a lab instrument may still generate a complete or incomplete set of data or records. These data sets, which typically never find their way into a report, are often referred to as “orphan data.” Understanding the story of orphan data or records may be essential in addressing concerns about data integrity and should not be disregarded. Typically, regulators will ask the following question: “Does this orphan data contain any results that may have indicated an out-of-specification (OOS) or out-of-trend (OOT) value?” However, this question can introduce bias. An alternative approach might be to consider the following: In cases of a failed system suitability test or where the data cannot be accepted for other reasons, should the analyst be permitted to continue collecting and processing the data to completion, simply to determine whether there may be OOS or OOT results in that invalidated set?

The input values for the invalidated out-of-specification rate (IOOSR), once the right kinds of tests are established, include three contributing values:

  1. Sum of all release and stability tests
  2. Sum of release test and stability test OOS results
  3. Sum of release test and stability test OOS results where the source of the OOS result is identified as an aberration of the measurement process.

For example, a site might report 91 invalidated OOS (IOOS) results out of 100 total OOS results observed prior to laboratory investigation, across 20,000 tests. In this scenario, although 91% of OOS results were assigned a laboratory aberration as the root cause, true product failures accounted for only nine of the 20,000 tests. While this metric may indicate the failure of laboratory measurement processes where an OOS result was indicated, it completely ignores laboratory results that need to be scientifically invalidated where no OOS result is indicated. The IOOSR metric may simply indicate that the analytical method is especially complicated and difficult to perform correctly the first time.
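The arithmetic behind this example can be sketched in a few lines. This is a minimal illustration of the calculation, not an official FDA reporting tool; the function name and inputs are assumptions for demonstration only.

```python
def invalidated_oos_rate(total_tests, total_oos, invalidated_oos):
    """Compute the IOOSR (share of OOS results invalidated as measurement
    aberrations) and the rate of true product failures across all tests.

    Hypothetical helper illustrating the worked example in the text.
    """
    ioosr = invalidated_oos / total_oos              # 91 / 100
    true_failures = total_oos - invalidated_oos      # 100 - 91 = 9
    true_failure_rate = true_failures / total_tests  # 9 / 20,000
    return ioosr, true_failure_rate

# The scenario from the text: 20,000 tests, 100 OOS results, 91 invalidated
ioosr, failure_rate = invalidated_oos_rate(20_000, 100, 91)
print(f"IOOSR: {ioosr:.0%}")                      # IOOSR: 91%
print(f"True product failure rate: {failure_rate:.3%}")  # 0.045%
```

Note that a high IOOSR here coexists with a very low true failure rate, which is exactly the interpretive ambiguity the text describes.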

Fewer orphan files, fewer internal investigations

Addressing mistakes in the measurement process may legitimately result in the creation of data that cannot be accepted as accurate. For example, test results created from an instrument that is not yet equilibrated and fails system suitability testing need to be repeated regardless of whether the original test results show that the sample is in- or out-of-specification. Errors in critical metadata entry (such as sample weights, dilutions, or standard concentrations, which are frequently entered manually) need to be noticed and corrected, and the results reevaluated, potentially resulting in multiple versions of results.

Reprocessing and reintegrating chromatograms where the initial integration is incorrect is another valid reason multiple versions of the same results exist. Each version becomes a kind of orphan data record (Figure 1)—yet documenting, investigating, and repeating measurements for invalid results, regardless of the actual test values, should be a transparent process performed by educated, trained, and trusted scientific staff.

Figure 1 – In an ideal world, every test would be executed correctly and accurately—the first time, every time—creating just one result to report. In a typical chromatographic data creation process, correcting data entry errors and optimizing integration parameters may be required, and tests may also need to be repeated. This figure shows all of the data that might be created through the normal testing process but that potentially may be considered indicative of “testing into specification,” such as unofficial pretesting of samples. The initial sample set, repeat sample sets, and all result versions should be saved, even if they are not reported, and may need to be officially invalidated. This allows analysts to address requests by lab managers and regulators who may have to oversee the data creation process.

The focus on this IOOSR metric, where the invalidation rate only includes results indicating an OOS result, can lead to inaccurate conclusions regarding quality. That is because some testing techniques may inherently create more orphan data than others, especially when they rely heavily on the skills of the laboratory staff in the creation and interpretation of results and, consequently, are subject to human error.

An alternative metric might compare the proposed IOOSR to an invalidated in-specification rate (IISR) to arrive at a total invalidated rate, an indication of how often data is invalidated regardless of the outcome (Figure 2). This would potentially uncover whether there is any bias toward invalidating OOS or OOT data more frequently than in-specification data.
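The comparison could be sketched as follows. The counts and the bias threshold are illustrative assumptions, not values from the FDA guidance; the point is simply that a large gap between the two rates would merit investigation.

```python
def invalidation_rates(oos_total, oos_invalidated, in_spec_total, in_spec_invalidated):
    """Return (IOOSR, IISR, total invalidation rate).

    Hypothetical sketch of the comparison described in the text:
    a laboratory that invalidates OOS results far more often than
    in-specification results may have a biased invalidation process.
    """
    ioosr = oos_invalidated / oos_total
    iisr = in_spec_invalidated / in_spec_total
    total_rate = (oos_invalidated + in_spec_invalidated) / (oos_total + in_spec_total)
    return ioosr, iisr, total_rate

# Illustrative counts: 100 OOS results (91 invalidated) and 19,900
# in-specification results (150 invalidated) from 20,000 total tests.
ioosr, iisr, total_rate = invalidation_rates(100, 91, 19_900, 150)
print(f"IOOSR: {ioosr:.1%}, IISR: {iisr:.2%}, total: {total_rate:.2%}")
if ioosr > 10 * iisr:  # assumed, crude threshold for flagging possible bias
    print("OOS results are invalidated far more often than in-spec results")
```

With these assumed numbers, the OOS invalidation rate dwarfs the in-specification rate, which is the pattern such a comparison is meant to surface.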

Figure 2 – Rather than focusing on the rate of scientific invalidation of only those results that indicate an OOS, comparing the IOOSR to a total invalidation rate that also includes invalidated in-specification results might highlight more quality concerns and uncover any bias in the invalidation process.

To reduce the total invalidation rate, a number of laboratories are adopting a “phase gate” approach to laboratory tests, limiting the number of test results generated that may need to be subsequently invalidated. Such a phased approach to laboratory testing might be particularly useful for chromatographic testing. Because of the sensitivity of the LC technique to system equilibration, it is considered good laboratory practice to first ensure the instrument is ready through the correct use of system readiness checks. These should be performed using independent solutions, not actual samples, unless a well-characterized secondary standard is used (see the FDA Draft Guidance for further direction on how to perform system readiness checks, also known as equilibration, test, or trial injections3). If the system is properly equilibrated, analysts should then run, in isolation, initial system suitability analyses. Laboratories should discourage running analyses or processing raw data if the instrument or method is determined to be faulty, inaccurate, or simply not yet equilibrated. Only if the instrument passes both the system readiness check and the system suitability check should samples be injected on the chromatograph.
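The phase-gate sequence above can be modeled as a simple gating function: each stage must pass before the next begins, so sample data is never collected on an unready system. The function and the callables it takes are assumed stand-ins for real instrument procedures, offered only as a sketch of the workflow logic.

```python
def phase_gate_run(readiness_check, suitability_check, run_samples):
    """Gate LC sample injection behind readiness and suitability checks.

    Illustrative sketch of the phased workflow described in the text;
    the three callables are hypothetical stand-ins for lab procedures.
    """
    if not readiness_check():      # independent test solutions, never samples
        return "halted: system not ready"
    if not suitability_check():    # system suitability run in isolation
        return "halted: system suitability failed"
    run_samples()                  # samples injected only after both gates pass
    return "run complete"

# A failed suitability check stops the run before any sample data exists
status = phase_gate_run(lambda: True, lambda: False, lambda: None)
print(status)  # halted: system suitability failed
```

Because the run halts before samples are injected, there is no sample data to invalidate and no outcome knowledge to bias the decision, which is the point of the phased approach.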

A concerning trend in laboratories is the elimination of system readiness checks because of erroneous fears about the trial, test, or equilibration injections that have featured in FDA warning letters (where actual sample pretesting was passed off as test injections). However, correct use of system readiness checks with independent test solutions provides an important opportunity to catch errors before the samples are run. Some SOPs are now written so that system readiness checks are prohibited altogether or must always be submitted as part of a sequence. For expediency, system suitability and sample analyses (with “in sequence” system suitability checks) are traditionally submitted together for overnight runs; however, if the system suitability tests fail, an entire night’s data may need to be invalidated.

When the results of an entire run, including sample tests, are rejected, it may increase suspicions during an audit about testing into compliance. (Analysts have been accused of deliberately failing system suitability tests for analyses that might not meet specification.) With a properly written SOP, if the system fails a readiness or suitability check before samples are collected or, at a minimum, before any data processing creates detailed knowledge of the outcome, then the decision to invalidate a result is made in an unbiased manner, irrespective of the test result.

In an attempt to limit the creation of orphan data, laboratory managers may be contributing to the problem of test aberrations and the need to invalidate an even higher percentage of chromatograms by, for instance, not requiring staff to fully equilibrate liquid chromatographs, prepare them for accurate analyses, and test that readiness. Other practices, such as performance metrics related to batches passed, may also encourage laboratory staff to hide or ignore undesirable data that should be documented in a transparent way.

Striving for transparency over expediency

It is clear that any use of quality metrics, whether for internal quality system assessment and process improvement or as a means of measuring and reporting overall quality to outside agencies, should be scientific, relevant to the quality of the product, and, most importantly, implemented within an overall culture of quality.

However, as with any kind of metric, it is imperative that the measurement does not drive unintended behavior, decrease transparency of errors in the lab, or hold analytical tests to an unrealistic standard. Encouraging analytical staff to try to hide those errors, or discouraging accurate peak integration because redoing it would create more orphan data records, is counterproductive to improving product quality.

In its white paper entitled “FDA Pharmaceutical Quality Oversight—One Quality Voice,”4 the FDA promoted a vision of a “maximally efficient, agile, flexible manufacturing sector that reliably produces high quality drug products without extensive oversight.” In some quarters of the industry, there remains a belief that the FDA’s Quality Metrics Initiative and its focus on “right first time” test results will result in a stricter degree of oversight, specifically around IOOSR, and drive efficiency, along with risk-based agility and flexibility, out of the quality measurement process.

References

  1. https://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm455957.pdf
  2. https://www.fda.gov/downloads/drugs/guidances/ucm070287.pdf
  3. https://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM495891.pdf
  4. https://www.fda.gov/downloads/AboutFDA/CentersOffices/OfficeofMedicalProductsandTobacco/CDER/UCM442666.pdf

Heather Longden is senior marketing manager, Informatics and Regulatory Compliance, Waters Corporation, 34 Maple St., Milford, MA 01757, U.S.A.; tel.: 508-478-2000; e-mail: [email protected]; www.waters.com