Should be called "Napoli Kool-Aid," straight from the foothills of Mt. Vesuvius...
Same Kool-Aid flavor, different ID.
As an investor I throw caution to the wind, and as tenuous as it may seem, Zalicus's Z-160 IS a prime example of bias. I remember five different posters elaborating on wonderful Monte Carlo methods to derive as many different random statistical events as they could muster, and for some reason the balance always tipped in favor of a positive outcome. So what happened to Z-160? Well, it looked stellar for quite some time, and all the phase gates and peer reviews had nothing to publish but excellent prospects for the drug to go NDA. Studies were pushed out, and board bloggers took the delays as empirical inference that all must be good. Sorry, it didn't turn out to be what they expected. The drug did not meet the requirements and was dropped during its Phase 2 studies.
Now we know that MM-398 has made it to Phase III, but mind you, this is the most difficult phase to pass, in large part due to the patient enrollment population and, in MM's case, the matrix of cohorts. Then add to that the possibility of a snafu such as was seen with Peregrine Pharma's bavituximab, where control-group mix-ups caused a massive delay in phase-gate information being released. So, to put it in perspective, let's be honest with each other and say "so far it looks good, and MM-398 sure looks like it has a strong chance of making it to an NDA." But all this napoli analysis really is questionable, since none of us can see his work, read it, understand it, judge it, and use that information, disseminated or not, to bolster our feeling that the odds of success are statistically better than not. I mean, guys, guys, guys, let's chill out... I think you scare people away with too much stat propaganda, especially when we start creating mongers and avatars that declare oneself The Study.
All napoli and others, including me, are trying to do is handicap the odds of the trial. By digging into the weeds a bit and setting up a system where you can adjust the important variables (in this case, control-arm survival and overall enrollment rate), you can get a decent 'feel' for what is going on and the likelihood of success.
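For anyone curious what that kind of handicapping setup looks like mechanically, here is a minimal sketch. To be clear, this is NOT napoli's actual model (none of us have seen that); it assumes exponential survival in both arms and calls a simulated trial a "win" when the median survival advantage clears a margin. Every number in it (medians, arm size, margin, trial count) is an illustrative placeholder, not trial data.

```python
import random
import statistics

LN2 = 0.6931471805599453  # an exponential with median m has rate ln(2)/m

def simulate_median_gap(n_per_arm, ctrl_median, trt_median, rng):
    """One simulated trial: draw survival times (months) for both arms
    and return the treatment-minus-control difference in sample medians."""
    ctrl = [rng.expovariate(LN2 / ctrl_median) for _ in range(n_per_arm)]
    trt = [rng.expovariate(LN2 / trt_median) for _ in range(n_per_arm)]
    return statistics.median(trt) - statistics.median(ctrl)

def success_probability(ctrl_median, trt_median, n_per_arm=150,
                        win_margin=1.0, n_trials=2000, seed=42):
    """Fraction of simulated trials whose median survival advantage
    clears `win_margin` months. Every argument is a knob to adjust."""
    rng = random.Random(seed)
    wins = sum(
        simulate_median_gap(n_per_arm, ctrl_median, trt_median, rng) > win_margin
        for _ in range(n_trials)
    )
    return wins / n_trials
```

Lower the assumed control-arm median a few tenths of a month and watch the success fraction move; that sensitivity to the knobs is the whole point of the exercise.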
Of course there is no guarantee, and yes, there is always the possibility of a left-field event. We're not doing this for the first time, and we all know of the examples you mention. Most failures I have seen have been caused by bad data assumptions and/or unknown variables. But models often do work when the trial design is straightforward, the endpoints are clear, and the data are discernible. This trial is very basic in its conception, nothing is easier to measure than death, and the company has been relatively transparent about enrollment.
So I think napoli's modeling system is excellent, and we are all free to agree or disagree with the assumptions he made. The only thing that REALLY counts is the downside scenario, and that's where I personally focus. I am comfortable that with survival at 5 months or less in the control arm, the trial should succeed even under a very pessimistic enrollment model. That gives me comfort, as the TREATMENT arm of the MM-398 trial had a mean survival of 5.2 months, for heaven's sake, and in previous trials of 5-fluorouracil in refractory pancreatic cancer, in patients with the same 'functional status' (health), survival was 3 months.
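As a rough sanity check on that downside argument, Schoenfeld's standard approximation for log-rank trials converts an assumed hazard ratio into a required number of deaths. If you assume exponential survival (so the hazard ratio is just the ratio of medians) and plug in the 5.2 months vs. the historical 3 months from above, the arithmetic looks like this; alpha and power here are conventional placeholders, not anything from the actual trial design.

```python
import math
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80):
    """Schoenfeld's approximation: deaths needed for a two-sided
    log-rank test with 1:1 randomization."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(4 * (z_a + z_b) ** 2 / math.log(hazard_ratio) ** 2)

# Under the exponential assumption, HR = control median / treatment median
hr = 3.0 / 5.2
events = schoenfeld_events(hr)   # about 104 deaths
```

An effect that large only needs on the order of a hundred deaths; a pessimistic control arm at 5 months against 5.2 would need many thousands. That asymmetry is exactly why the downside scenario is the number worth arguing about.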
The FDA rules state that the benefit has to be of "significant value." Is a two-month extension in one percentile of the arm in question considered "significant," based on the population that actually extended the arm? Do we even know how many clinical cohorts are skewing the sigma? Like I said, it's a very subjective area. I do like math and realize many different models can agree on a range, but I always question the means of achieving it, because even in its simplest form there is always one variable we overlook that hits us over the head later. How do I know this? That's my job.