Purpose
Over the years, great steps have been taken in reporting what really matters in clinical research. Formerly, a clinical researcher might simply have reported: "in my own experience treatment X does not do well for condition Y". The use of a ''p''-value cut-off point of 0.05 was introduced by R.A. Fisher; this led to study results being described as either statistically significant or non-significant. Although the ''p''-value made research outcomes more objective, using it as a rigid cut-off point can have potentially serious consequences: (i) clinically important differences observed in studies may be statistically non-significant (a type II error, or false negative result) and therefore be unfairly ignored; this is often the result of studying a small number of subjects; (ii) even the smallest difference in measurements can be shown to be statistically significant by increasing the number of subjects in a study. Such a small difference could be irrelevant (i.e., of no clinical importance) to patients or clinicians. Thus, statistical significance does not necessarily imply clinical importance. Over the years, clinicians and researchers have moved away from physical and radiological endpoints towards patient-reported outcomes. However, using patient-reported outcomes does not solve the problem of small differences being statistically significant but possibly clinically irrelevant. To study clinical importance, the concept of the minimal clinically important difference (MCID) was proposed by Jaeschke et al. in 1989. The MCID is the smallest change in an outcome that a patient would identify as important. The MCID therefore offers a threshold above which the outcome is experienced as relevant by the patient; this avoids the problem of mere statistical significance. Schunemann and Guyatt recommended the term minimally important difference (MID) to remove the "focus on 'clinical' interpretations" (2005, p. 594).
Methods of determining the MID
There are several techniques for calculating the MID. They fall into three categories: distribution-based methods, anchor-based methods and the Delphi method.
Distribution-based methods
These techniques are derived from statistical measures of the spread of data: the standard deviation, the standard error of measurement and the effect size.
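The two most commonly quoted distribution-based thresholds can be computed from baseline scores alone. The sketch below is a minimal illustration, not a prescribed procedure: the half-standard-deviation and one-SEM thresholds are common conventions from the MID literature, and the scores and reliability coefficient are hypothetical.

```python
import math
import statistics

def distribution_based_mid(scores, reliability):
    """Estimate candidate MID thresholds from the spread of the scores.

    `reliability` is the instrument's reliability coefficient (e.g. a
    test-retest ICC), needed for the standard error of measurement (SEM).
    """
    sd = statistics.stdev(scores)          # sample standard deviation
    sem = sd * math.sqrt(1 - reliability)  # SEM = SD * sqrt(1 - reliability)
    return {
        "half_sd": 0.5 * sd,  # half a standard deviation: a common threshold
        "one_sem": sem,       # one SEM: another common threshold
    }

# Hypothetical baseline scores on a 0-100 quality-of-life scale
baseline = [62, 55, 70, 48, 66, 59, 73, 51, 64, 58]
print(distribution_based_mid(baseline, reliability=0.85))
```

Which threshold (if either) is appropriate depends on the instrument and the population; distribution-based estimates are usually checked against an anchor-based estimate.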
Anchor-based
The anchor-based method compares changes in scores with an "anchor" as a reference. An anchor establishes whether the patient is better after treatment compared to baseline, according to the patient's own experience. A popular anchoring method is to ask the patient at a specific point during treatment: "Do you feel that the treatment improved things for you?". Answers to anchor questions can vary from a simple "yes" or "no" to ranked options, e.g., "much better", "slightly better", "about the same", "somewhat worse" and "much worse". The difference between the average scale score of patients who answered "better" and that of patients who answered "about the same" provides the benchmark for the anchor-based method. An interesting variant of the anchor-based method is to establish the anchor before treatment: the patient is asked what minimal outcome would be necessary to undergo the proposed treatment. This method allows for more personal variation, as one patient might require more pain relief while another strives for more functional improvement. Different anchor questions and different numbers of possible answers have been proposed; currently there is no consensus on the one right question, nor on the best answers.
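The benchmark described above can be sketched as a comparison of group means. This is a minimal sketch under stated assumptions: the change scores, the anchor labels, and the two-category comparison ("better" vs. "about the same") are hypothetical simplifications of a real anchor questionnaire.

```python
import statistics

def anchor_based_mid(changes, anchors):
    """Benchmark the MID as the mean change score of patients who answered
    'better' minus the mean change score of those who answered
    'about the same' (hypothetical anchor labels).

    `changes` are post-minus-pre outcome scores, paired with `anchors`.
    """
    better = [c for c, a in zip(changes, anchors) if a == "better"]
    same = [c for c, a in zip(changes, anchors) if a == "about the same"]
    return statistics.mean(better) - statistics.mean(same)

# Hypothetical change scores with the patients' anchor answers
changes = [12, 9, 15, 2, -1, 3, 11, 0]
anchors = ["better", "better", "better", "about the same", "about the same",
           "about the same", "better", "about the same"]
print(anchor_based_mid(changes, anchors))  # -> 10.75
```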
Delphi method
The Delphi method relies on a panel of experts who reach consensus on the MID. The expert panel receives information about the results of a trial. Each expert reviews it separately and provides a best estimate of the MID. The responses are averaged, and this summary is sent back to the experts with an invitation to revise their estimates. The process continues until consensus is achieved.
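The iterative feedback loop can be caricatured in code. This is only a toy illustration of the convergence process: real Delphi panels revise their judgments based on reasoning and discussion, not a formula, and the initial estimates and stopping tolerance here are invented.

```python
import statistics

def delphi_rounds(estimates, tolerance=0.5, max_rounds=10):
    """Toy Delphi loop: each round, every expert moves halfway toward the
    panel mean; stop when all estimates agree to within `tolerance`.

    Returns (consensus estimate, number of rounds taken).
    """
    for round_no in range(1, max_rounds + 1):
        panel_mean = statistics.mean(estimates)
        if max(estimates) - min(estimates) <= tolerance:
            return panel_mean, round_no  # consensus reached
        # Feedback step: experts revise toward the panel summary
        estimates = [(e + panel_mean) / 2 for e in estimates]
    return statistics.mean(estimates), max_rounds

# Hypothetical initial MID estimates from five experts
print(delphi_rounds([5.0, 8.0, 6.5, 10.0, 7.0]))
```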
Shortcomings
The anchor-based method is not suitable for conditions in which most patients improve and few remain unchanged. High post-treatment satisfaction leaves insufficient discriminative ability for calculating an MID. A possible solution to this problem is a variation that calculates a 'substantial clinical benefit' score: the comparison is made not between the patients who improve and those who do not, but between the patients who improve and those who improve considerably. MID calculation is of limited additional value for treatments whose effects appear only in the long run, e.g., tightly regulated blood glucose in the case of diabetes mellitus.
Caveats
The MID varies according to the disease and the outcome instrument, but it does not depend on the treatment method. Two different treatments for a similar disease can therefore be compared using the same MID, provided the outcome measurement instrument is the same. The MID may also differ depending on the baseline level (Hays, R. D., & Woolley, J. M. (2000). The concept of clinically meaningful difference in health-related quality-of-life research: How meaningful is it? ''PharmacoEconomics'', 18, 419–423), and it seems to differ over time after treatment for the same disease.
References
{{reflist}}