Guest — Posted November 7, 1999

Dear Colleagues,

Thanks to Slater's recommendation to look at the BMJ article on osteopathy, I read the preceding letters to the editor, including one entitled "Chiropractic rot" by an ill-informed MD named Bramwell-Wesley. I quickly dashed off a response, and I was just informed that it was posted at the BMJ site <http://www.bmj.com/cgi/content/full/319/7218/1176> -- so score one for OregonDCs, and zero for snotty British medical practitioners who feel they are somehow entitled to all patient care, no matter how poorly they handle some conditions. I'll be a lot more impressed with these types of arguments as soon as effective alternatives are offered to what DCs do -- alternatives that don't involve long-term use of medications with deleterious and potentially disastrous effects on the kidneys, liver, GI system, and eventually the affected joints themselves.

That brings me to another new paper in the NEJM on osteopathy versus standard medical treatment: "A Comparison of Osteopathic Spinal Manipulation with Standard Care for Patients with Low Back Pain," The New England Journal of Medicine, November 4, 1999, Vol. 341, No. 19. I have not read the full text of the paper, but essentially, the authors reported no statistically significant differences between osteopathic and traditional medical treatment of chronic LBP when measuring pain and functional outcome. They did report less use of meds in the osteopathic treatment group. They also reported that patient satisfaction was virtually the same in both groups -- greater than 90%.

This last bit should set off alarms in everybody's head. Why do people go to DCs instead of MDs for LBP? In part, because they are extremely *dissatisfied* with the care they are receiving. If you think back to any comparison study of satisfaction with chiropractic vs.
medical care for LBP, patient satisfaction typically ranges from 20-40% for the medical group and 70-90% for the chiropractic group. It is quite obvious, then, that the "standard" medical treatment in this study was not standard at all, and was far superior to the care generally seen. This problem severely limits the generalizability of the study results, and suggests manipulation of the study conditions by the authors. It is not all that infrequent for researchers to monkey around with study design to get their desired results. Unfortunately, most manuscript reviewers don't understand enough about statistics and research design to spot some of these potentially fatal study flaws.

A perfect case in point is the study that came out in the NEJM about a year and a half ago, from Cherkin et al., comparing chiropractic with PT and with the use of an exercise booklet. The authors reported that there were no differences between the groups -- so why should anyone spend any money on either chiropractic or PT? My colleague Ann Rossignol and I critiqued this paper, and the review will be published in JMPT this year. I have attached the file in Word format for any of you who want to get a jump on the JMPT.

The essence of the critique was that the study numbers were too low and the measurement system too coarse to tell the difference between the groups. The easiest way to think of this error is to imagine a 100-yard dash timed with a stopwatch that has only a minute hand. If everyone arrives at the finish line before the first minute elapses (as would be expected), the race is scored as a giant tie among all the competitors, no matter who actually came in first. In some measurements in the Cherkin et al. paper there was more than a 50% difference between the groups, yet statistically the groups appeared the same. This paper should never have been published; instead it made headlines across the country.
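For those who want to see the "minute-hand stopwatch" problem in numbers: it is a statistical power problem, and a short simulation makes it concrete. The sketch below is purely illustrative (the group means, SD, sample size, and 20-point "coarse scale" are my own assumed values, not figures from the Cherkin et al. paper): a real 50% difference between groups goes statistically undetected much of the time when the sample is small, and even more often when scores are lumped into wide bins.

```python
import random
import statistics

def simulate_power(n, true_means, sd, n_sims=2000, granularity=None, seed=1):
    """Estimate how often a two-sample t-test detects a real group
    difference, optionally after rounding scores to a coarse scale.
    (Illustrative sketch; all parameter values are assumptions.)"""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_sims):
        a = [rng.gauss(true_means[0], sd) for _ in range(n)]
        b = [rng.gauss(true_means[1], sd) for _ in range(n)]
        if granularity:
            # The "minute-hand stopwatch": scores lumped into wide bins.
            a = [round(x / granularity) * granularity for x in a]
            b = [round(x / granularity) * granularity for x in b]
        pooled_var = (statistics.variance(a) + statistics.variance(b)) / 2
        se = (2 * pooled_var / n) ** 0.5
        t = abs(statistics.mean(a) - statistics.mean(b)) / se
        if t > 2.05:  # approximate two-sided critical t, alpha=.05, df = 2n-2 = 28
            detected += 1
    return detected / n_sims

# Assumed scenario: a genuine 50% difference in mean improvement
# (10 vs. 15 points, SD 8), with only 15 patients per arm.
power_fine = simulate_power(15, (10, 15), 8)
power_coarse = simulate_power(15, (10, 15), 8, granularity=20)
print(f"fine scale: {power_fine:.2f}, coarse scale: {power_coarse:.2f}")
```

With these assumed numbers the test misses the real difference in the majority of simulated trials, and the coarse scale misses it even more often -- exactly the "giant tie" the stopwatch analogy describes.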
The best way to fight bad research is to become informed about the weaknesses of such research. The next step is to start funding our own *real* research. I believe the FCER has funded more than enough studies of the intertester reliability of palpation; how about some money for studies that can actually demonstrate what it is that chiropractors do best -- fix people. I am more than happy to design and run clinical trials of chiropractic -- ones that will get published in NEJM and JAMA. Why isn't the profession directing more funds to such research?

Sorry for the length of the pontification.

D Freeman

Attachment: vcard [not shown]