Statistical Studies vs. Good Medicine

You may have a bad back and not even know it. One of the most common back problems physicians treat, called spinal stenosis, causes leg symptoms: pain, tingling, numbness and weakness down the legs, knees and thighs — often with no back pain at all. Few orthopedists can get through a week without seeing a patient with spinal stenosis — the problem is just that widespread. Worse, it doesn't really get better. With all the pills, therapies, shots, braces and exercises we prescribe, it's a rare case of spinal stenosis that we can make go away.

There is also, of course, the option of surgery. Some orthopedists operate on the spine, but most of us, though trained in spine surgery, do not. That job is left to the orthopedists and neurosurgeons who are designated as "spine surgeons," and who restrict their practices to operations on the spine. Spinal stenosis operations are among the most common, the most expensive and the most feared, with lots of complications — paralysis, blood clots, infection, leaking of spinal fluid, intractable pain.

But the dreary world of spinal stenosis was cheered up last February by a report in the New England Journal of Medicine that found that surgery for spinal stenosis works better than any of our non-surgical treatments. The well-known authors of the paper included surgeons who have spent their careers doing the operation. The report claimed to be a first — an "evidence-based" study in which researchers did statistical analysis of how spinal stenosis patients fared with surgery versus non-surgical treatment.

It might seem odd that we have already spent billions of dollars and millions of painful and risky man-hours — as doctors and patients — on spinal stenosis surgery, all without "knowing" (until now) whether or not it "works." Welcome to the trendy world of "evidence-based" medicine.

Evidence-based research means, more or less, that someone has done statistical analyses on subjectively chosen "outcome parameters." There are some cases where the outcome, or endpoint, of a trial is pretty definitive and measurable — like, say, death. Other times, the outcome parameters can be as nebulous as "satisfaction" or "discomfort" — but in these cases, when numbers are assigned to subjective experience (e.g., "my discomfort level is now three out of 10") and plugged into an algebraic formula, they produce "rational" or "evidence-based" conclusions, which suddenly have the ring of scientific truth. As far as the evidence-based movement is concerned, heeding mere "expert opinion," from even the most successful clinician, would be akin to taking the word of a bearded man with a wand.

Health-insurance companies and hospital administrations are quite enthusiastic about evidence-based research. "Our evidence-based study says the test you ordered is not needed, so we're not paying for it" is the gist of their letters to patients and doctors. But has there been any improvement in our overall treatment of spinal stenosis since the revelation that the surgery actually works? Have more folks lined up for the operation since? Not that I can tell from my practice. When I tell stenosis patients that we now "know," in 2008, that the surgery works better than other treatments, they look at me like I'm crazy.

"Is the operation safe?"

"Well, we've been doing it for over 20 years."

"And you didn't know whether it worked until now?"

Of the few hundred patients I've told about that paper over the past five months, not one has chosen to undergo surgery. Real patients are scared of being cut open, of getting infections, not waking up, becoming paralyzed. They're scared of the pain. And they don't care about statistics. The smarter ones understand how complicated a decision it is to have an operation. What smart patients want is something beyond statistics — most call it judgment — as they decide between the pain they're living with now versus the risks of a procedure that can't guarantee a cure.

As far as I can tell, faith in expert opinion is how medical students, residents and even full-blown docs do much of their learning — mostly just trusting a few great doctors who teach. I know enough math to know that neither my colleagues nor I really know statistics. Not one orthopedist or neurosurgeon of my acquaintance truly understands the math used in statistical papers. They learn by faith in somebody else's statistics, by trust in the reputation of an individual, journal or university.

Those number-loving few who actually know the statistics — are they any good as surgeons? Maybe. It depends on whether their manual skills are as precisely honed as their math skills. I can only say that I will not be quizzing anyone on multivariate analysis when picking out the surgeon to do my back.

We must, at last, step where we cannot see. We are made like this — there is only so much an individual can truly understand. And there is always a point in decision-making at which reason fails (funny enough, this was actually proven, mathematically, by the 25-year-old logician Kurt Gödel). Ultimately, without absolute evidence, decisions must still be made — the inescapable truth is that in the end, we all trust one expert or another.

There is no problem with statistical studies. They have been our trusty bloodhounds, tracking down new medical knowledge for decades. But "evidence-based research" is a mongrel, kept by business interests feigning patient advocacy, rarely, if ever, fed by the hand that helps when you're in pain.

Dr. Scott Haig is an Assistant Clinical Professor of Orthopedic Surgery at Columbia University College of Physicians and Surgeons. He has a private practice in the New York City area.