Pre-Study Validated Is Not the Same As Validated

A Perspective on Evidence Hierarchies in Bioanalytical Method Validation

Attend any bioanalytical conference, and you’ll hear ‘validation’ referenced frequently. Rarely, however, is ‘in-study validation’ discussed with the same rigor.

That gap is worth paying attention to.

When scientists say an assay is validated, or fully validated, they are almost universally referring to pre-study validation: a prescribed series of experiments performed with surrogate materials under controlled conditions, before any study samples are analyzed, as required by regulatory guidance. It is a defined exercise with clear deliverables, and once it’s complete, the assay is widely treated as validated, full stop.

But pre-study validation is conducted in the absence of real study samples. It puts the assay through the prescribed paces before you have access to the biology relevant to the study population. It tells you what the method is capable of under those specific conditions; it cannot tell you whether the assay functions properly when it encounters real patient samples.

That is what in-study validation informs. It is the assay proving, through actual study performance, that it caught what it needed to catch: that responses were detected, that the subpopulation with clinically meaningful reactivity was identified, that impact was found where it existed and not fabricated where it didn’t. In-study validation is the evidence that the assay is fit for its intended purpose. And it is, in fact, the more important of the two.

The Equation Gets Flipped

The bioanalytical field has largely inverted this hierarchy. Pre-study validation has become the headline measure of assay credibility, while in-study performance is treated as secondary, or assumed rather than demonstrated. That inversion has real consequences for how regulatory challenges are framed and how confidently sponsors can stand behind their data.

What This Looks Like in Practice

This is not a theoretical concern; it is a scenario that has played out across the industry, and immunogenicity testing is where it tends to surface most visibly.

Consider a program in late-phase development where the ADA assay has performed exactly as it should. Responses have been detected across the patient population. Within that data set, the subpopulation with clinically meaningful reactivity has been identified. The immunogenicity findings correlate with drug exposure and clinical outcomes. The data tell a clear, complete story.

Then a regulatory question arises. Pre-study sensitivity, established with a positive control, was 200 ng/mL rather than the expected 100 ng/mL, and a handful of samples contained drug concentrations above the level demonstrated in drug-tolerance testing. The ask is to redevelop the assay.

Scientifically, this is a difficult position to accept. The assay caught everything that was clinically relevant. It identified the patients with responses large enough to matter and measured additional lower-level responses that did not have a clinical impact. The in-study data make clear that the method did exactly what it needed to do. But because the pre-study parameters, established with an artificial control, didn’t align with the guidance, that evidence was set aside.

What makes this particularly difficult to reconcile is that a more sensitive and drug-tolerant assay would not change the study’s conclusions. The subpopulation with clinically relevant responses remains the same. A redeveloped assay would capture additional low-level responses, none of which carry clinical significance and all of which the original assay had already appropriately contextualized. No additional scientific value would be gained. Time, resources, and development costs would be spent arriving at the same interpretation.

The Positive Control Is Not the Patient

Much of what pre-study validation characterizes is not an intrinsic property of the assay; it is a property of the positive control used to establish it. Sensitivity and drug tolerance are defined relative to a surrogate reagent that approximates the biological signal of interest, but that reagent is not the patient’s immune response. Pre-study performance parameters, by definition, characterize how the assay responds to a stand-in.

This distinction has a direct practical consequence. When pre-study performance is challenged, the path of least resistance is often not redevelopment but positive control substitution. Swapping in a positive control that yields more favorable pre-study numbers will shift sensitivity and drug tolerance accordingly, while every actual study sample produces identical results. The assay’s performance against patient-derived antibodies has not changed in any meaningful way, but the regulatory expectation has been satisfied.

In cases where substituting a surrogate reagent resolves the challenge without altering a single study result, the question worth asking is what the pre-study parameter was actually measuring in the first place.

In-Study Validation Is the Be-All and End-All

The method is either valid for its intended purpose, or it is not. There are no tiers of validation, no consolation prize for methods that cleared the pre-study hurdles. What determines fitness for purpose is whether the assay can answer the question being asked, with sufficient reliability, in the context in which it is being used. And the only data that can confirm that are the in-study data.

Pre-study validation is necessary. It is the foundation on which in-study confidence is built, and it is what guidance requires before study initiation for good reason. But it is a prediction: a well-designed, rigorous prediction, but a prediction nonetheless. In-study validation is the process of testing that prediction against reality.

The field would be better served by treating it that way: not as an afterthought to pre-study compliance, but as the definitive measure of whether the assay worked.

Kayla J. Spivey