(Bloomberg Opinion) -- Nine years ago, the health-care world was atwitter at reports of a promising new way to control spiraling medical costs.
Atul Gawande, the respected surgeon and author, had written an article in the New Yorker publicizing the success of treating patients in Camden, New Jersey, who were driving up costs with constant, repeated visits to hospital emergency rooms. It seemed that close monitoring and counseling of these patients by a specially trained team of nurses, social workers and community health professionals could steer them away from making unnecessary hospital visits by giving them better and cheaper ways to take care of their medical problems.
Partly on the basis of Gawande’s article, the approach, dubbed "hot-spotting," caught the attention of health-care advocates and managers and attracted significant foundation funding.
There’s just one problem: On closer inspection by a team of researchers from the Massachusetts Institute of Technology, the breakthrough turned out to be a mirage. Hot-spotting didn't reduce hospital visits at all, and probably didn't save a dime (though it’s possible it helped the patients in other ways).
The reality check comes courtesy of a research method known as a "randomized controlled trial," used for a study released on Wednesday in the New England Journal of Medicine. Unlike the research cited by Gawande, which relied on observing patients before and after they went through the Camden program, the randomized trial assigned some people randomly to the Camden program and others to regular care.
The randomized approach has the major benefit of more clearly identifying causality, since the random assignment allows a clean comparison between those who received the treatment and otherwise similar people who did not. The Camden experience provides a case study showing the benefits of the technique, which were recognized by the awarding of the 2019 Nobel Prize in Economic Sciences to three economists for their work on randomized controlled trials.
The Gawande article on Camden highlighted a 40-percent drop in emergency-room visits after the first 36 patients were enrolled in the hot-spotting program, which was aimed at reducing their documented heavy use of hospital care. He concluded that, as a result, net savings would be "almost certainly, revolutionary."
To see whether the program actually caused such savings, in 2014 the leaders of the Camden program cooperated with social scientists Amy Finkelstein, Annetta Zhou, Sarah Taubman and Joseph Doyle of MIT and the National Bureau of Economic Research to study the program's impact through a randomized controlled trial. The Camden leadership should be commended for welcoming a randomized trial, since many program advocates would not have been willing to subject their activities to this rigorous standard.
The MIT researchers identified 800 patients who had been hospitalized in Camden between 2014 and 2017. All had serious problems like substance abuse or impaired mobility and half lacked a high-school diploma. Some of the patients were selected at random for assignment to the Camden hot-spotting program. Others got no special care. If hot-spotting were truly effective, it would presumably have reduced hospitalization and its costs more substantially for the group that participated.
It didn't happen. The researchers found that the rate of re-hospitalization was 62 percent in both groups, with no statistically significant difference between them. In other words, the randomized results suggest that the program had no effect on readmissions.
Even more telling, the MIT researchers also found a 38-percent decline in hospital readmissions under the Camden program in the six months after enrollment — almost exactly the figure Gawande had cited for a similar analysis of the initial patients. What a simple before-and-after analysis like the one cited by Gawande missed, however, is that those who weren't enrolled in the program experienced a similar decline in readmissions over the same period. The effect of the program on hospital admissions was thus a mirage, probably caused by statistical reversion to the mean, in which exceptionally high-cost episodes tend, on average, to be followed by lower-cost ones.
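The pull of reversion to the mean is easy to demonstrate. The short simulation below is purely illustrative and is not drawn from the study's data: it invents patients whose hospital use mixes a stable underlying propensity with period-to-period noise, "enrolls" the heaviest users from one period — the way a superutilizer program selects patients — and shows that their average use falls in the next period even though no one received any intervention.

```python
# Illustrative sketch of regression to the mean (hypothetical data, not the
# Camden study): patients selected for being extreme in one period tend to
# look less extreme in the next, with no treatment at all.
import random

random.seed(0)
N = 10_000

# Each patient's stable expected admissions, plus fresh noise in each period.
propensity = [random.gammavariate(2, 1) for _ in range(N)]
period1 = [max(0.0, p + random.gauss(0, 2)) for p in propensity]
period2 = [max(0.0, p + random.gauss(0, 2)) for p in propensity]

# "Enroll" the top 5% of users from period 1, mimicking superutilizer selection.
cutoff = sorted(period1, reverse=True)[N // 20]
enrolled = [i for i in range(N) if period1[i] > cutoff]

before = sum(period1[i] for i in enrolled) / len(enrolled)
after = sum(period2[i] for i in enrolled) / len(enrolled)
print(f"mean admissions before enrollment: {before:.2f}")
print(f"mean admissions after enrollment:  {after:.2f}")  # lower, untreated
```

A before-and-after comparison of the enrolled group alone would credit this drop to the program; only a randomly assigned control group, which shows the same drop, reveals that it would have happened anyway.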
Randomized trials can be expensive and time-consuming, so they aren’t always practical. And in some cases, they are simply inappropriate (the most famous example being that a randomized trial on whether parachutes work is clearly a bad idea). But when policymakers and social scientists really want to know whether a program works the way they hope it does, the randomized trial remains the best tool — as the Camden experience underscores.
To contact the author of this story: Peter R. Orszag at firstname.lastname@example.org
To contact the editor responsible for this story: Jonathan Landman at email@example.com
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Peter R. Orszag is a Bloomberg Opinion columnist. He is the chief executive officer of financial advisory at Lazard. He was director of the Office of Management and Budget from 2009 to 2010, and director of the Congressional Budget Office from 2007 to 2008.
For more articles like this, please visit us at bloomberg.com/opinion
©2020 Bloomberg L.P.