Biopharma Board Governance

How Boards Should Evaluate Ambiguous Clinical Data

Lawrence Fine

Most board decisions in biopharma are not made on the basis of clean results. The moments where the data clearly says "go" or clearly says "stop" are uncommon. Far more often, the board is presented with data that is ambiguous — results that could be interpreted as encouraging by an optimist and disappointing by a skeptic, with reasonable arguments on both sides.

These are the moments that define a board. Not the easy calls, but the ones where the evidence genuinely does not resolve the question.

The Nature of Ambiguity in Clinical Data

Before addressing how boards should respond to ambiguous data, it helps to understand why ambiguity is the norm rather than the exception.

Clinical trials are designed to answer specific statistical questions. But the data that comes back often raises as many questions as it answers. A trial might hit its primary endpoint but miss important secondary endpoints. It might show a statistically significant effect that falls short of what the market would consider clinically meaningful. It might demonstrate clear efficacy in a subgroup that was not pre-specified in the protocol.

Ambiguity also shows up in safety data. An adverse event signal might appear that is numerically small but clinically concerning. A drug might perform well against placebo but show an unfavorable comparison against standard of care. A dose-response curve might not behave the way the preclinical data predicted.

The critical thing for board members to understand is that ambiguity in clinical data is not a failure of trial design. It is an inherent feature of testing unproven compounds in complex biological systems. Boards that expect clean answers will consistently find themselves frustrated. Boards that develop a framework for evaluating ambiguous data will consistently make better decisions.

What the Board Is Actually Deciding

When ambiguous data arrives at the board level, the question is rarely a simple "do we continue or not?" The decision space is more nuanced than that. The board might be choosing between continuing the current program as designed, modifying the trial design and continuing, pursuing a different indication with the same compound, seeking a partner to share the risk, or winding down the program and redirecting capital.

Each of these options has different capital requirements, different timelines, and different risk profiles. The board's job is not to read the data itself — that is what the scientific team is for — but to evaluate the decision landscape that the data creates.

This is a crucial distinction. A board member who tries to become a clinical data analyst is misusing their time and probably overstepping their governance role. A board member who asks management to lay out the decision tree that the data implies, and then stress-tests each branch, is doing exactly what they should be doing.

A Framework for the Gray Zone

Having sat through dozens of data readouts that fell into the gray zone, I have found that a structured approach prevents both the excessive optimism and the reflexive pessimism that ambiguous data tends to produce. There are several questions the board should work through systematically.

What did we expect, and why?

Before any data readout, the board should have a clear understanding of what the company expected the results to show and what the basis for that expectation was. If the preclinical package predicted a 40% response rate and the interim data shows 28%, the board needs to understand whether the preclinical model historically overpredicts in this indication, whether the patient population enrolled matches what was expected, and whether the treatment duration was sufficient to see the full effect.
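One way to make this comparison concrete is to put an uncertainty interval around the interim number before concluding it falls short of the expectation. The sketch below uses the standard Wilson score interval for a binomial proportion; the sample size of 50 patients is assumed purely for illustration, since the article does not specify one.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    phat = successes / n
    denom = 1 + z**2 / n
    center = (phat + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# Hypothetical interim readout: 14 responders out of 50 patients (28%),
# against a preclinical expectation of a 40% response rate.
lo, hi = wilson_interval(14, 50)
print(f"95% CI: {lo:.1%} to {hi:.1%}")  # the interval still contains 40%
```

At this assumed sample size, the interval around the 28% observation still includes the predicted 40%, which is exactly the kind of ambiguity the article describes: the interim number neither confirms nor refutes the prior hypothesis on its own.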

Without the context of what was expected and why, the board has no way to evaluate what the data actually means. This sounds obvious, but it is remarkable how often data is presented to a board without a clear discussion of the prior hypothesis.

What would change the interpretation?

Ambiguous data often looks different with additional information. The board should push management to articulate what data, if available, would resolve the ambiguity. Would a longer follow-up period clarify the trend? Would biomarker analysis identify a responsive subgroup? Would a dose optimization study address the question?

This question does two things. First, it reveals whether additional clarity is achievable at a reasonable cost and timeline. Second, it tests whether management has genuinely thought through the data or whether they are presenting an optimistic interpretation and hoping the board will not probe further.

What do the skeptics say?

Every data package has a bull case and a bear case. In my experience, management almost always presents the bull case, and for understandable reasons — the CEO and CSO have spent years on this program and they believe in it. The board's job is to ensure that the bear case is heard.

This does not mean the board should appoint itself as the opposition. It means the board should explicitly ask management to present the strongest case against proceeding. What would a skeptical FDA reviewer say about this data? What would a potential partner's due diligence team flag? What would a short seller focus on if this were a public company?

If management cannot articulate the bear case, that is itself a warning sign. It suggests that the analysis has been filtered before it reached the boardroom.

What does this cost us to be wrong?

This is ultimately a capital allocation question, and it is where board members with financial backgrounds add the most value. If the board decides to continue and the data ultimately does not support the program, what is the cost? How many months of runway does it consume? What other programs are deprioritized? What is the opportunity cost?

Conversely, if the board decides to stop and the data would have matured favorably, what is the cost of abandoning a program that might have worked? In biopharma, both types of errors are real. Continuing too long with a program that will ultimately fail can burn through cash and destroy a company. Stopping too early can kill a program that would have succeeded with one more study.

The board should think about this in terms of capital at risk relative to the potential outcome. A $5 million additional investment to resolve ambiguity on a program with a $500 million peak sales opportunity is a very different calculation than $5 million to resolve ambiguity on a program with a $50 million opportunity.
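The arithmetic behind that comparison is simple enough to write down. The numbers below are illustrative stand-ins, not figures from any real program: a 15% chance that the additional study supports the program, and risk-adjusted program values of $200M and $20M as rough proxies for the $500M and $50M peak-sales opportunities.

```python
def expected_value(spend_m, p_success, value_if_success_m):
    """Expected value (in $M) of an incremental spend to resolve ambiguity:
    probability-weighted payoff minus the spend itself."""
    return p_success * value_if_success_m - spend_m

# Same $5M spend, same probability of success -- only the prize differs.
big = expected_value(5, 0.15, 200)   # 0.15 * 200 - 5, clearly positive
small = expected_value(5, 0.15, 20)  # 0.15 * 20 - 5, negative
```

The same $5 million spend is accretive in one case and value-destroying in the other, which is why the size of the opportunity, not the size of the spend, should drive the decision.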

The Cognitive Traps

Boards navigating ambiguous data are vulnerable to several well-documented cognitive biases. Being aware of them does not make a board immune, but it gives directors the language to call them out when they appear.

Sunk cost reasoning is the most common. The argument goes: "We have already invested $30 million in this program. We cannot walk away now." This is exactly backwards. The question should always be whether the next dollar invested is justified by the prospective return, regardless of what has already been spent.
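The irrelevance of sunk costs can be shown in a few lines: charging money already spent to every branch of the decision shifts all the values equally and cannot change which branch is best. The dollar figures here are hypothetical.

```python
def best_option(evs):
    """Pick the option with the highest forward-looking expected value ($M)."""
    return max(evs, key=evs.get)

# Hypothetical branches: continuing costs $20M more with a 30% chance of a
# $100M payoff; stopping salvages nothing.
forward = {"continue": 0.30 * 100 - 20, "stop": 0.0}

# Subtracting the $30M already spent from every branch changes nothing.
with_sunk = {option: ev - 30 for option, ev in forward.items()}
assert best_option(forward) == best_option(with_sunk) == "continue"

# And if the next study costs $40M, stopping is right -- despite the $30M sunk.
forward2 = {"continue": 0.30 * 100 - 40, "stop": 0.0}
assert best_option(forward2) == "stop"
```

The only numbers that matter are the prospective ones: the cost of the next step and the probability-weighted value it buys.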

Anchoring to the original thesis is equally dangerous. When a program was started, there was a hypothesis about how it would perform. When the data comes back differently, there is a strong psychological pull to interpret the data in a way that preserves the original thesis rather than updating the thesis to match the data.

False precision is particularly insidious in clinical data. A management team that presents a response rate of 34.7% is communicating false confidence in a number that, with a different randomization, might have been 28% or 41%. Boards should push for confidence intervals and ranges, not point estimates.
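A quick simulation makes the point vivid: re-running the identical trial with the identical true response rate produces observed rates that swing over a wide band. The setup below assumes, for illustration only, that the "34.7%" came from 26 responders in 75 patients.

```python
import random

def simulate_observed_rates(true_rate, n_patients, n_trials, seed=0):
    """Re-run the same trial many times to see how far the observed
    response rate moves on randomization alone."""
    rng = random.Random(seed)
    rates = []
    for _ in range(n_trials):
        responders = sum(rng.random() < true_rate for _ in range(n_patients))
        rates.append(responders / n_patients)
    return sorted(rates)

# Hypothetical: a "34.7%" point estimate, i.e. 26 of 75 patients.
rates = simulate_observed_rates(true_rate=26 / 75, n_patients=75, n_trials=1000)
p5, p95 = rates[50], rates[949]
print(f"90% of re-runs land between {p5:.1%} and {p95:.1%}")
```

At this sample size the middle 90% of re-runs spans well over ten percentage points, which is why a number quoted to one decimal place communicates far more certainty than the trial can support.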

Consensus bias is the tendency for the board to seek unanimity, especially under stress. Ambiguous data creates discomfort, and there is a natural desire to resolve that discomfort by reaching agreement quickly. Boards should be suspicious of rapid consensus on ambiguous data. If the data is genuinely ambiguous, quick agreement likely means the dissenting views are being suppressed rather than that the answer is obvious.

The Role of Independent Expertise

For data packages that fall into the gray zone, the board should consider whether independent scientific expertise is needed. This is not about questioning management's competence. It is about acknowledging that management has an inherent conflict of interest when evaluating data on their own programs, and that an independent perspective can add genuine value.

An independent clinical advisor can help the board understand how the data compares to other programs in the same therapeutic area, whether the endpoints measured are the ones that regulators and payers actually care about, and whether the trial design was adequate to detect the effect the company is looking for.

The decision to bring in independent expertise should not be treated as a vote of no confidence in the management team. Boards that position it this way create a dynamic where management resists external review. Instead, it should be framed as standard governance practice for material decisions — which is exactly what it is.

What Good Looks Like

The best board data reviews I have observed share a few characteristics. The data is presented with full context, including what was expected and how the results compare to competitive programs. Management presents both the optimistic and cautious interpretations. The discussion focuses on the decision tree rather than on a single recommendation. The financial implications of each path are quantified. And the board reaches a decision with a clear rationale that can be documented, even if that decision is to wait for additional data before committing.

The worst data reviews I have witnessed are the ones where management presents a single narrative, the board asks a few polite questions, and the meeting ends with a vague sense that the program is continuing. In these cases, the board has not actually made a decision — it has simply failed to make one, which defaults to the status quo.

A Practical Recommendation

If there is one tactical recommendation I would offer to any board member facing ambiguous clinical data, it is this: before the data readout meeting, ask management to prepare a written document that includes the pre-specified success criteria for the study, the actual results against those criteria, the three most optimistic and three most cautious interpretations of the data, the cost and timeline for each possible path forward, and a clear recommendation with stated assumptions.

Having this in writing before the meeting serves several purposes. It forces management to commit to a position rather than reading the room and adjusting. It ensures that the board has time to reflect on the data rather than reacting in real time. And it creates a governance record that documents the basis for whatever decision is made.

Clinical ambiguity is not something boards can eliminate. It is something boards can learn to navigate. The boards that do it well are the ones that have built a framework for the gray zone before the data arrives, rather than trying to build one in the moment when everyone in the room is anxious about the answer.
