Guest
Posted December 10, 2008

Excellent question. Here are some reasons that come to mind:

* Phase II studies often have a small study population (N), which increases the influence of chance on the outcomes (the more variable the natural clinical course of the condition, the larger N must be). Five flips of a coin could easily show all heads; the simulation sketch after this post illustrates the same point with small trials.

* Patient selection bias - the tendency of investigators to offer studies to better-functioning participants. Also, patients who are doing better can and will feel up to traveling to the best centers, and will inquire about and consider investigational therapies. These people will (on average) do better than historical expectations no matter the therapy.

* Non-uniform assessment of response, such as uneven CT scan intervals and biased interpretations of imaging.

* Expectation bias - seeing what we expect to see, based on strong belief in a theory or an investigator.

* Recall bias - a tendency to remember the good outcomes and forget the less desirable ones.

* Reporting bias - the sponsor publishing the good outcomes and not publishing otherwise.

The remedy for these sources of bias is a prospective study design, randomization, and blinding. This method eliminates all of the above sources of study bias - bias being defined as defects in study methods that lead away from the truth. It is a safeguard against the all-too-human tendency to see what we want to see.

"Prospective" means you test going forward instead of looking back (retrospective). It accounts for all the participants, as in: we will recruit 600 patients, 300 get this, 300 get that. (Like calling your shot when playing billiards, you state what the outcome will be beforehand.) With a prospective design ALL of the outcomes are accounted for, not just what is found looking back. It also provides a reliable denominator: if you have 150 complete responses and N is 300, you have a more reliable 50% CR rate (see the interval calculation after this post). Compare that with an individual CR in private practice, which tells us nothing about the chances of others doing as well.

Each arm of such a study is balanced by random selection, so you know the comparison is objective - it most accurately predicts outcomes for others, within a margin of error expressed as a p-value or confidence interval. For marketing approval, the outcomes are measured uniformly, with independent, blinded, third-party monitoring.

Such scientific method rescues us from ourselves ... from theory-based medicine. Even trained physicians have been fooled by clinical observations, most recently by hormone replacement therapy (HRT), which controlled studies later showed was not good for women.

Hope this helps.

Karl
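To make the small-N point above concrete, here is a minimal Python sketch (mine, not part of the original post) that simulates many hypothetical single-arm phase II trials of a therapy whose true response rate is 20%. The true rate, the 15-patient trial size, and the 40% threshold are made-up illustration values, not data from any real study.

```python
import random

# Illustration only: the true response rate, trial size, and threshold are assumptions.
TRUE_RESPONSE_RATE = 0.20    # the therapy's actual (unknown) response rate
PATIENTS_PER_TRIAL = 15      # a small phase II-sized enrollment
NUM_SIMULATED_TRIALS = 10_000

random.seed(42)  # fixed seed so the run is reproducible

observed_rates = []
for _ in range(NUM_SIMULATED_TRIALS):
    # each simulated patient responds with probability TRUE_RESPONSE_RATE
    responses = sum(random.random() < TRUE_RESPONSE_RATE for _ in range(PATIENTS_PER_TRIAL))
    observed_rates.append(responses / PATIENTS_PER_TRIAL)

# How often does chance alone make a 20% therapy look like a 40%-or-better therapy?
lucky_share = sum(rate >= 0.40 for rate in observed_rates) / NUM_SIMULATED_TRIALS
print(f"True response rate: {TRUE_RESPONSE_RATE:.0%}")
print(f"Small trials showing >= 40% responses by chance alone: {lucky_share:.1%}")
```

Run it and several percent of these tiny trials report a response rate of 40% or more, roughly double the therapy's true effect, purely by chance.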
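And to put a number on the "reliable denominator" point: the hypothetical 150 complete responses among N = 300 from the post give a 50% CR rate, and with a denominator that large the 95% confidence interval is only about plus or minus 6 percentage points. This is a minimal sketch using the textbook normal-approximation (Wald) interval; the formula choice is mine, not something specified in the original post.

```python
from math import sqrt

# Numbers taken from the example in the post: 150 complete responses among 300 patients.
responses = 150
n = 300

cr_rate = responses / n
# Wald (normal-approximation) 95% confidence interval for a proportion.
standard_error = sqrt(cr_rate * (1 - cr_rate) / n)
margin = 1.96 * standard_error

print(f"CR rate: {cr_rate:.1%}")
print(f"95% confidence interval: {cr_rate - margin:.1%} to {cr_rate + margin:.1%}")
```

With N = 30 instead of 300, the same 50% rate would carry a margin of error roughly three times as wide, which is the small-N problem again.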