In this video clip from "The Pharma & Biotech Show," recorded on Feb. 9, Dr. Frank David, author of The Pharmagellan Guide to Analyzing Biotech Clinical Trials, discusses how investors and researchers should evaluate the results of a clinical study when the trial is negative -- and what to keep in mind when it's positive.
Frank David: There's a pair of New England Journal pieces. One of them is called "The Trial Was Positive, Now What?" The other was "The Trial Was Negative, Now What?" They're both instructive. In one case you say, the trial was positive, but maybe you should still be a little bit cautious in your interpretation. The other is what you're talking about: the trial was negative, so are there reasons to still be hopeful? I think some of those reasons would include, for example, some demonstration of pharmacologic activity, target engagement, etc. Are there other ancillary pieces of data that came out of the study where, yes, it failed, but the drug did do what it was supposed to do? Then you could ask: maybe that mechanism is not true, or it isn't important. Or maybe the drug does what it's supposed to do, but it just wasn't given at a high enough dose or for long enough.
I think looking at the study size in terms of the overall powering is always worth thinking about. Some of these studies are designed with very aggressive estimates of the minimum difference the study is meant to detect. That's done because the larger the difference you're powered to detect, the smaller the study: it's cheaper and faster to run a study that's a little bit leaner. But you do sometimes miss the chance to detect a difference that, from a medical, clinical, and also a regulatory point of view, could have been significant -- the study just wasn't large enough. I'd say that's another scenario in which you'd want to take a second look at a nominally negative study.
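The tradeoff Dr. David describes -- a more aggressive effect-size assumption lets you run a smaller, cheaper trial, at the cost of missing smaller but still meaningful differences -- can be sketched with the standard two-arm sample-size formula for a continuous endpoint. The function and the specific numbers below are illustrative assumptions, not figures from the interview:

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Approximate patients needed per arm in a two-arm trial
    comparing means of a roughly normal endpoint.

    delta: minimum difference the trial is designed to detect
    sigma: standard deviation of the endpoint
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Aggressive assumption: the drug moves the endpoint by 5 points (sd = 10)
print(n_per_arm(delta=5, sigma=10))    # 63 patients per arm

# Conservative assumption: a 2.5-point difference -- quadruple the size
print(n_per_arm(delta=2.5, sigma=10))  # 252 patients per arm
```

Because the required size scales with 1/delta², halving the detectable difference quadruples the trial -- which is why a "nominally negative" study powered only for a large effect can still be consistent with a smaller, clinically meaningful benefit.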