At this year's national IHI conference, Don Berwick challenged one of the basic tenets of evidence-based medicine. In his speech, he argued that even though a large randomized controlled trial showed no benefit from rapid response teams, we should implement this improvement anyway. Could this be? One of the world's foremost proponents of evidence-based medicine is telling his disciples to ignore the gold-standard study? He knows the history of well-meaning, knowledgeable physicians who were reluctant to learn that their understanding of physiology and their personal experience can be extremely misleading. There are many examples of the standard of care being reversed by a good national study: estrogen replacement therapy, antiarrhythmics for asymptomatic arrhythmias, bone marrow transplantation for breast cancer, surgery for certain types of back pain, and many more.
In his keynote speech, Don gave reasons why rapid response teams are a different case, and why the classical randomized controlled trial may not be the right test for them. When I first heard the speech, I liked what Don had to say. Afterward, however, I spoke with some people who strongly disagreed. I believe Don was right, and I predict he will look back with pride on having given such an important speech. Still, it will provoke a great deal of debate and dialogue that may lead him to wish he had worded a few things differently. So now I will give my best shot at explaining why this long-standing tenet of evidence-based medicine should be challenged.
Historically, when a randomized controlled trial showed that a therapy did not work, the counterevidence that it did work was weak: anecdotal reports, uncontrolled cross-sectional studies, or physiologic reasons why the treatment "should work." Randomized controlled trials have their known weaknesses (e.g., patient populations or treatments not representative of general community conditions), but large, well-managed trials are clearly superior to these other methods.
What makes the case Don presents different is that there is good, statistically valid evidence that rapid response teams work in some organizations. If an organization has implemented rapid response teams and, using statistical process control (or similar methods), observes a statistically significant improvement, how should that facility interpret a national study? Should it stop a process that has been statistically shown to save lives in its own organization? And if a process has been shown to save lives in some organizations, shouldn't other organizations try to replicate it?
But wait: what is this statistical process control (and these other industrial methods), and can such methods really demonstrate effectiveness better than a randomized controlled trial? The answer is yes. Statistical process control uses run charts and control charts to distinguish real changes in a process from ordinary random variation over time; a sketch of the idea follows below. All methods have their strengths and weaknesses, and care must always be taken in interpreting data, but under some conditions local evidence of improvement can be more valid and relevant than a national study. These are the methods industry uses to improve, and they are far more cost effective. If Toyota had used randomized controlled trials to find the best way of making cars, it would have gone out of business long ago. In fact, people from industries that use Lean Thinking and Six Sigma might be surprised to learn that the medical establishment does not value what they consider their standard methodology. It is time for healthcare to broaden the range of statistical tools it accepts as valid.
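To make the idea concrete, here is a minimal sketch of the kind of control-chart analysis a hospital might run, written in Python with entirely hypothetical numbers. It builds a u-chart of monthly cardiac arrests per 1,000 patient-days: the control limits come from a baseline period, and either a point outside the limits or a sustained run below the center line signals a real change rather than chance.

```python
# A minimal u-chart sketch (statistical process control). All numbers are
# hypothetical and chosen only to illustrate the method.

# (cardiac arrests, patient-days in thousands) per month
baseline = [(14, 9.8), (12, 10.1), (15, 9.9), (13, 10.0),
            (16, 10.2), (12, 9.7), (14, 10.0), (13, 10.1)]
after_rrt = [(8, 10.0), (7, 9.9), (9, 10.3), (6, 10.1),
             (8, 9.8), (7, 10.0), (6, 10.2), (8, 9.9)]

# Center line: overall baseline rate, arrests per 1,000 patient-days.
u_bar = sum(c for c, _ in baseline) / sum(n for _, n in baseline)

def outside_limits(months):
    """Return the months whose rate falls outside the 3-sigma limits."""
    flagged = []
    for i, (c, n) in enumerate(months, start=1):
        sigma = (u_bar / n) ** 0.5         # u-chart standard error
        lcl = max(u_bar - 3 * sigma, 0.0)  # lower control limit
        ucl = u_bar + 3 * sigma            # upper control limit
        if not lcl <= c / n <= ucl:
            flagged.append(i)
    return flagged

# Shewhart run rule: 8 consecutive points on one side of the center line
# signal a real shift even if no single point is extreme.
run_below = all(c / n < u_bar for c, n in after_rrt)

print(f"baseline center line: {u_bar:.2f} arrests per 1,000 patient-days")
print(f"post-RRT months outside 3-sigma limits: {outside_limits(after_rrt)}")
print(f"8 consecutive post-RRT months below center line: {run_below}")
```

With these invented numbers, no single post-implementation month falls outside the three-sigma limits, but eight consecutive months below the center line trips a standard run rule; that is exactly the kind of local, statistically grounded signal the argument above relies on.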
How can a process work at the local level and yet not be substantiated by a randomized controlled trial? There are a number of reasons, but the one I wish to focus on is the difficulty of studying complex systems. In a simple system (such as testing a new pharmaceutical), the active ingredient is prepared in a standard formulation and administered in a controlled manner. In a complex administrative system, each organization is, in effect, a different formulation and dispensing method. A new rapid response team policy may not mean much if the organization lacks the infrastructure or culture to support it, and these supporting mechanisms are difficult to assess or control for. In the right organizations, rapid response teams can make a big difference, yet when the intervention is implemented broadly, it may not have the desired effect; the toy simulation below shows how real local effects can wash out in a pooled study. Clearly, this is an opportunity for further research. One could interpret the data as implying that rapid response teams should not be implemented until we fully understand how to make them work. Alternatively, one could interpret it as implying that organizations should implement rapid response teams without assuming they will work, and instead adopt best-practice improvement methods to figure out how to make them work. Here I would lean toward action, since there may never be enough data to specify exactly how each organization should adopt the change. This lesson likely applies to many complex organizational changes.
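To illustrate this dilution effect, here is a small Python simulation in which every parameter is invented for the sake of the example: half the hospitals have the culture and infrastructure to make a rapid response team deliver a 30% relative reduction in arrests, and half realize no effect at all. The pooled comparison that a multi-center trial would report shows only a diluted effect, even though the intervention genuinely works at the supported sites.

```python
import random

random.seed(1)

# Toy simulation of a pooled trial over heterogeneous hospitals.
# Every parameter here is invented purely for illustration.
N_HOSPITALS = 40
PATIENTS = 2000          # patients per hospital per arm
BASE_RATE = 0.02         # baseline probability of an arrest

def count_events(rate, n=PATIENTS):
    """Simulate the number of arrests among n patients."""
    return sum(random.random() < rate for _ in range(n))

control_events = treated_events = 0
for h in range(N_HOSPITALS):
    # Half the hospitals can support the intervention (30% relative
    # reduction); in the other half it has no effect.
    supported = h < N_HOSPITALS // 2
    treated_rate = BASE_RATE * (0.7 if supported else 1.0)
    control_events += count_events(BASE_RATE)
    treated_events += count_events(treated_rate)

n = N_HOSPITALS * PATIENTS
print(f"pooled control rate: {control_events / n:.4f}")
print(f"pooled treated rate: {treated_events / n:.4f}")
# The true 30% benefit at supported sites is diluted to roughly 15%
# overall; in a noisier, smaller trial this can easily fail to reach
# statistical significance, while control charts at the supported
# hospitals would still show a clear, sustained shift.
```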
Why is this topic so important? Because the pace of healthcare quality improvement could be so much faster if we learned to accept the statistically validated methodologies used by other industries. Right now, these methods are neither accepted nor understood by many in healthcare, including leading medical journal editors. They will never replace the randomized controlled trial, but they can greatly accelerate change, especially where randomized controlled trials are too expensive or not feasible, as is often the case with complex organizational change.