Have not had a chance to check out the Board for a week. Have wondered throughout the week if I am missing something with this 40% drop. Don't think I am.
First, the drug works at the 25 mg dose. Case closed. Absent blatant fraud, you simply don't get a p-value of 0.001 in a phase 3 study unless the drug works (to be specific: if the drug did nothing, results this strong would show up by chance only about once in a thousand trials). I mean seriously, hasn't Feurstein taken statistics? With that p-value there is simply no credible argument that the drug does anything other than work exceptionally well. And insofar as the Shrekerli hypothesis was based on the drug not working over the length of a 3-month study, that hypothesis is now dead.
OK, so that's the world of reality and basic statistics. But we also must address the world of FDA, where a demonstration of efficacy (which, again, is not in doubt with a p-value of .001) must often be replicated in a second study. So what happened in the second study, where the 25 mg dose came in at p = .021 against a prespecified .025 bar, and does it matter? A couple of things on that. Preliminarily, as hijacked points out, this sort of conduct in relation to a data lock is not at all unusual. So even if the data lock was shifted in a way that allowed a primary endpoint to be hit that otherwise would not have been (which some of our friends have stated as proven fact), that is not unusual as long as SOPs were followed, which they almost certainly were. Second, it should be noted that FDA typically views a p-value of .05 or better as evidence of statistically significant efficacy; the endpoint in KODIAC was defined at .025, which is quite aggressive. In any case, there is certainly no regulatory requirement that two studies each hit .025 in order to obtain product approval. IF the database lock issue is what moved the result under .025, and if FDA has some concerns about that, then in resolving that issue the chief thing FDA will look at is: well, how did the other study do? In other words, is this a real issue of efficacy, or are we worrying ourselves over nothing here? My own view on these p-values -- which are ridiculously excellent in one case and quite good in the other -- is that as long as these trials were otherwise conducted with integrity, there is a very minuscule chance -- and no more than that -- that FDA declines to approve the drug over this data lock issue, with overall efficacy data as compelling as these.
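For what it's worth, the point that one true drug effect can throw off very different p-values in two identically designed trials is easy to show with a quick simulation. Everything below is hypothetical -- the arm size and responder rates are my assumptions for illustration, not the KODIAC data:

```python
# Simulation: one "true" drug effect, many replicate 12-week trials.
# All numbers are hypothetical illustrations, not the KODIAC data.
import math
import random

def two_prop_p(x1, n1, x2, n2):
    """One-sided p-value (normal approximation) for H1: rate1 > rate2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 0.5 * math.erfc(z / math.sqrt(2))  # upper tail of the normal

random.seed(0)
n = 320                                # assumed patients per arm
rate_drug, rate_placebo = 0.42, 0.29   # assumed true responder rates

pvals = []
for _ in range(1000):                  # 1000 replicate trials of the same drug
    x_d = sum(random.random() < rate_drug for _ in range(n))
    x_p = sum(random.random() < rate_placebo for _ in range(n))
    pvals.append(two_prop_p(x_d, n, x_p, n))
pvals.sort()
print(f"10th/50th/90th percentile p: {pvals[100]:.2g} / {pvals[500]:.2g} / {pvals[900]:.2g}")
```

Run it and the replicate p-values spread across orders of magnitude even though the drug's true effect never changes -- which is exactly why two pivotal trials of a working drug rarely post matching p-values.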
NOTE: Some of our old friends on the board are now positing the drug has zero chance of approval on efficacy grounds, with zero evidence for their assertions. Has anyone pointed to a comparable instance in which FDA declined approval -- meaning one pivotal study shows extraordinary efficacy, and the other shows a p-value well below .05 with an issue such as this? No, and they won't.
Second, as regards safety. The CV incidents were trending toward more prevalent in the control arms according to the hazard ratios (without reaching statistical significance). That is very good news. Does it put safety to rest? No, of course not. The 52-week study will do that, one way or the other. Assume CV issues still trend lower on the drug arm of that study. Will FDA then say, "guys, we know you are trending lower on CV issues in a well-controlled study, but we want a specific CV outcomes study with 10,000 patients"? Less than a 5 percent chance of FDA taking that position, in my view. That is of course my humble opinion only.
In short, we hear from a number of once-sage voices on this board that the drug is now dead, zero shot of approval without new studies, etc. The probabilities in my view -- at least on the current state of the record -- favor the opposite view. Of course there is uncertainty, but if you agree with my thinking, then uncertainty has been overpriced; if you disagree, then short away.
DCXavier, Hoyas already told you. Here's a quote from the NYTimes article:
'Different studies of the placebo effect report wildly different results. One survey of 117 trials of two ulcer drugs found that, depending on the trial, patients in the placebo group had anywhere from zero to a 100 percent recovery rate.
The drugs also varied in their effectiveness from one trial to the next; sometimes patients on the placebo did better than those on the drug. Intriguingly, the results varied from country to country, with Brazilians showing no placebo effect and Germans having a strong one. Why? No one knows, but it doesn’t appear to be because of anything inherently German: trials of drugs for hypertension found a weaker placebo effect in Germany than in other countries.
The problem is that humans are not machines, and emotions are not abstractions. Hope and expectation, anxiety and fear, trust and suspicion — these cause physiological changes in the brain that can interact with drugs, changing their effects.
This is even true for a drug like morphine. Yes, it’s a powerful painkiller. But it’s far more powerful if a doctor marches in, tells you he’s going to give you morphine, and injects you, than it is if it is administered secretly by a hidden machine.'
I read that earlier. Which ulcer medicine? What were the studies measuring? How many people? Phase 1/2/3? Show us the data, man!
The placebo effect in painkillers is real. Many painkillers have failed efficacy trials, and that is a real risk for NKTR-181/192/171. The body produces natural endorphins; you probably start cranking them out the minute you sit down in a dental chair. Extended use of opioid drugs reduces the body's ability to create them naturally, which is why efficacy tests are given to opioid-naive subjects.
From the AZ website:
KODIAC-04 and -05 are both multicenter, randomized, double-blind, placebo-controlled pivotal trials of 12 weeks duration evaluating 12.5 mg and 25 mg naloxegol administered once-daily. The primary endpoint in both trials was percentage of OIC responders versus placebo over 12 weeks of treatment, where a responder was defined as having at least three Spontaneous Bowel Movements (SBM) per week, with at least one SBM per week increase over baseline, for at least nine out of 12 weeks, and at least three out of the last four weeks. Under the design of both trials, statistical significance for the primary endpoint would be achieved if at least one of the two naloxegol doses had a p-value below 0.025.
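To see how strict that responder definition is, here is one reading of the criterion in code. A sketch only: checking the "at least one SBM per week increase over baseline" part week by week (alongside the three-SBM floor) is my assumption from the wording above, not something taken from the protocol:

```python
# One reading of the KODIAC responder definition quoted above.
# Assumption (mine, not the protocol's): "at least one SBM per week increase
# over baseline" is checked week by week, like the >=3 SBM/week floor.

def is_responder(weekly_sbm, baseline):
    """weekly_sbm: 12 weekly SBM counts; baseline: pre-treatment SBM/week."""
    good = [w >= 3 and w >= baseline + 1 for w in weekly_sbm]
    # Responder: >= 9 of 12 qualifying weeks AND >= 3 of the last 4 weeks.
    return sum(good) >= 9 and sum(good[-4:]) >= 3

# 11 qualifying weeks, 3 of the last 4 -> responder
print(is_responder([4, 5, 3, 4, 4, 5, 3, 4, 2, 4, 5, 4], baseline=1))   # True
# 10 qualifying weeks overall, but only 2 of the last 4 -> not a responder
print(is_responder([4, 4, 4, 4, 4, 4, 4, 3, 4, 2, 3, 1], baseline=1))   # False
```

Note how the last-four-weeks clause can disqualify a patient who meets the nine-of-twelve requirement -- a responder has to hold the effect through the end of the trial, not just early on.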
Sentiment: Strong Buy
I would actually like to see the details of the PK and distribution data for the various dosage forms of Relistor. The drug is suboptimal; nobody can argue that (you don't go with SC first time out otherwise). Does it accumulate anywhere? Is there a subset of patients that can N-demethylate a portion of the drug? Etc. "Fixing" things via formulation can work, but it adds one more layer of variables and potential issues. The SLXP call stated their trial was underpowered for what the FDA wanted, but what the FDA was actually looking for was not at all clear. If the larger AZ trial data looks good -- and we have every suggestion it does -- the scenarios I see range from outright full approval, to conditional approval with continued examination (as Hijacked suggested, expanding and/or continuing the current long-term safety study -- not bad at all), to some delay while additional data is collected. An unknown "what does the FDA want" has been added to the equation. Claiming to know the mind of FDA is foolish, but there has been an overreaction, no doubt largely due to traders taking advantage (the p-value on that is 0.001).
I can't know how long this lasts or how low we go, but I have some cash ready and have started adding in portions.
"It should be recognized, however, that p-values are simply measures of probability over random chance and have nothing to do with safety or specific efficacy . . . So of course Naloxegol works."
Marketmaker -- that is not the case. A p value of .001 in a study with this trial design does indeed signal, almost irrefutably, that the drug is efficacious and clinically quite meaningful. With regard to "of course Naloxegol works," I guess you weren't around when the Shrekerli hypothesis was put forward suggesting the drug does not work over 3 months, causing a 20 percent drop in stock price. That hypothesis has now been laid to rest.
DCX -- you are a bit all over the place. You now state the problem with these efficacy data is not lack of efficacy but rather "tremendous inconsistency." But variations of this sort from one trial to another are common; the placebo effect is notoriously variable from one study to the next. FDA is going to dig into these data and try to figure out what they really mean -- whether they signal efficacy with a high degree of confidence, or whether they instead signal real statistical concerns around efficacy. There is simply no question that the answer is the former -- unless there were major problems with trial integrity. Again, there is no regulatory requirement I am aware of for two studies to hit .025 in order for a drug to be approved. Kindly refer to the regs if you think I am wrong about that.
As for the CV concern, I have listened to the Relistor calls, and my own takeaway was that FDA wants a well-controlled study. The Relistor study was not "well-controlled." The Nektar study, on the other hand, is indeed "well controlled." Is it possible FDA will go beyond that and say that none of the studies planned by any of these sponsors (all in the thousand-patient range, give or take a few hundred) is big enough, even though FDA itself provided input on required study size? Sure, anything is possible. Is it possible a 10,000-patient, 5-year CV outcomes trial is required that kills this class of medicines altogether? Sure, anything is possible. But let me put the question to you: assuming the safety data come back solid and are trending toward a better CV event profile on the naloxegol arm, which way are you betting? Do you think it more likely that FDA says no-go at that point, or are you betting that they say either approved, or approved with monitoring responsibilities imposed? My guess is you are betting on one of the latter, and your posts stating that there is zero chance of same are simply meant to be provocative.
"In KODIAC-04, the P value for the 25 milligram dose was equal to 0.001 and in KODIAC-05 the P value for the 25 milligram dose was equal to 0.021"
Always good to put the best foot forward, but in the second trial it barely cleared the target p-value of .025. It should be recognized, however, that p-values are simply measures of probability over random chance and have nothing to do with safety or specific efficacy. And of course naloxone works. It has been used as an emergency-room treatment for drug OD for half a century, and orally for OIC for just as long. So the PEGylated version also works, and the question is: does prolonged exposure to naloxegol cause CV problems? Not the end of story.
Good old Marketmaker!
Either saying the obvious (Naloxone works) or getting things hopelessly wrong:
"p-values are simply measures of probability over random chance and have nothing to do with safety or specific efficacy"!
It's syntax, Jim, but not as we know it... What the heck does s/he mean?
The problem with the efficacy data is that it is tremendously inconsistent: p-values of 0.015 and 0.001 in KODIAC-04 vs. 0.202 and 0.021 in KODIAC-05. That's a huge swing -- the 12.5 mg dose was more effective in KODIAC-04 than the 25 mg dose in KODIAC-05! The FDA is going to question how well the trials were controlled. There is no doubt that AZN is digging into the data. My opinion is that KODIAC-04 is closer to the truth, and KODIAC-05 is more likely to have irregularities if they exist; KODIAC-05 had far more second-world test sites than KODIAC-04.
The efficacy data for patients with cancer pain has not yet been released, although the study also completed in September. This study also had a large number of second world test sites.
The FDA's concern about mu-opioid antagonists isn't clear, except that it appears to be due to possible low-level withdrawal. There were only three patients out of 1000 in the single-arm, one-year Relistor safety trial who had CV events attributable to Relistor. Two were returned to the drug. Yet the FDA won't budge. That's not very many AEs. It seems to me it may not be about the raw number of events. It might be about physiological changes that could lead to CV problems in the future. Heart palpitations and increased blood pressure are common during opioid withdrawal; I wonder if these symptoms (or others) were seen in Relistor patients. But that's just my guess. Until either a drug company or the FDA provides color on this, we are all flying blind.
It's not 'tremendously inconsistent'. At the 25 mg dose, one study came in at p = 0.001 and the other at p = 0.021 -- both under the prespecified bar. Is that 'tremendously inconsistent'? No. People are not robots, and phase 3 medical trials are not conducted in laboratories. It is a given that there will be differences between studies; if there were no difference in results, now that would be suspicious. Because differences are a given, a bar is set -- so that we can see with what degree of confidence we can rely on findings of effectiveness. This bar was set unusually high, as Hoyas pointed out above (.025 rather than the customary .05), yet the drug at the target dose beat it in both studies. Had the bar been set at the customary .05, both results would look comfortably significant.
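To put rough numbers on why a swing from ~0.001 to ~0.021 need not mean the trials measured different drugs: with hypothetical arm sizes and responder counts (all assumed for illustration, chosen only to land near the reported p-values -- these are not the KODIAC figures), moving a handful of responders shifts the p-value by more than an order of magnitude:

```python
# How far apart are p ~ 0.001 and p ~ 0.021 in patient terms?
# Arm sizes and responder counts below are assumptions for illustration,
# chosen only to land near the reported p-values -- not the KODIAC data.
import math

def two_prop_p(x1, n1, x2, n2):
    """One-sided two-proportion z-test, normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return 0.5 * math.erfc((p1 - p2) / se / math.sqrt(2))

n = 320  # hypothetical patients per arm
p_strong = two_prop_p(133, n, 95, n)  # ~42% vs ~30% responders
p_weaker = two_prop_p(119, n, 95, n)  # ~37% vs ~30% responders
print(f"{p_strong:.4f} vs {p_weaker:.4f}")
# Only 14 drug-arm responders out of 320 separate the two results.
```

In other words, a difference of a few percentage points in observed responder rate is enough to move a trial from "ridiculously excellent" to merely "quite good" on the p-value scale.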
Interesting take from a point of view detailing facts, scenarios, values, and interpretations. I, on the other hand, came to a similar conclusion based solely on backstreet facts and the face value of what I read, what I see, and how I interpret the current movements of active management. Your CEO is not acting in a fashion similar to the normal gag counsel puts on these guys in the face of extreme possible sanctions. I have noted the choice words used in the initial press release: "In the meantime," "not to rely." Those are guided reflex words from counsel that ensure they immediately alerted any and all upon the first notification of the coding confusion. Remember that they specifically mentioned 'coding' and 'distribution' specific to a third party.
Fraud? I think not. This is the PR that lays it all out, knowing the repercussions will quickly attract lawsuit sharks. They wouldn't lay blame on a third party if they didn't have a solid reason. Also remember they have a securities attorney with a great background on staff.
Enough said, without any scientific analysis :)
Sentiment: Strong Buy