
Comparative effectiveness research requires competitive effectiveness

Often, interventions to be evaluated should be compared with other intervention(s) or usual care rather than with inactive (placebo) interventions or no intervention at all. This is especially the case when care that is accepted as effective and safe is already available, and therefore cannot be omitted from the comparison for scientific, clinical and ethical reasons. In such cases we speak of head-to-head comparisons in the context of comparative effectiveness research [1]. These terms underline that, in the interest of optimizing patient care and reducing research waste, clinical effectiveness research must focus on comparing the best possible options, and stop merely documenting that interventions outperform sham interventions without testing them against active intervention(s) already known to be efficacious.
To achieve its purpose, comparative effectiveness research should also ensure that the interventions are applied under truly comparable conditions, e.g., with regard to optimal dosing, monitoring, and equivalent experience of the clinicians involved. If that is not the case, the comparison may be biased in favour of the intervention that is better applied, and is therefore not only unfair but, more importantly, clinically useless and potentially harmful. Reasons for such biased comparisons can lie with the researchers, with the state of expertise and experience in the study setting, and with funding bias.
Researchers leading or performing a study may, for instance, be academically focused on a newly developed intervention and the related hypothesis. This could result in more motivation and thoroughness in applying the new intervention than the reference intervention, even if other methodological aspects (e.g., randomization and blinding) are optimal. As to setting, the available expertise and skills may be greater for interventions that have been locally developed and studied than for the comparator. Likewise, funding is often tied to one specific intervention (e.g., a drug), which then receives extra attention in being optimally applied, while the comparator is just routinely implemented. Also, the funded study protocol may be designed, whether intentionally or not, more from the perspective of the funding manufacturer than from the perspective of the non-funding manufacturers of comparator interventions. Such funding bias has been linked to the choice of suboptimal comparators, inappropriate dosages of comparison drugs, suboptimal assessment of adverse effects, and other design features that can increase the likelihood of a favorable result for the funding company [2,3].
The concept of ‘competitive effectiveness’ research can help minimise such unfair comparisons [4,5]. In competitive effectiveness research, full attention is paid to enabling each of the compared interventions to be applied in the best possible way. This makes the research a real competition, like a race in which all participants are prepared to perform at top level. In that situation, the winner wins because (s)he really is the best, not because of unfair gaming or deliberate distortion of the competition.
In preparing a trial to meet these requirements, factors such as those described above should be proactively addressed. Thus, in designing and performing a trial, the best possible expertise and experience should ideally be available for each intervention. In this connection, it would also be advisable that, in a comparison of different drugs for the same indication, all corresponding manufacturers co-fund and support the study and are involved in preparing the design and the application of ‘their’ drug, within the context of an appropriately unbiased trial. This would ensure that all available support is invested in making an optimal and fair comparison, and would prevent later discussion and criticism that an unbalanced comparison had been made. In such a context, transparency about and adequate handling of conflicts of interest are especially important; this should be supervised and monitored by independent principal investigators.
Having said all this, we realize that a completely fair and balanced scientific competition between interventions is more challenging when comparing non-drug and complex interventions than when comparing single pharmaceutical interventions. For example, with non-drug interventions it will be much more difficult to achieve and maintain blinding. Comparing complex strategies makes it difficult to attribute the effect to specific ingredients of a clinical protocol. These methodological challenges in organizing a competition that is both valid and clinically useful are addressed in two highly interesting papers, published in this issue, on the evaluation of therapeutic medical devices.
In the first article, Schnell-Inderst et al. review existing guidance on methods for evaluating the comparative effectiveness of therapeutic medical devices (TMDs) and develop recommendations for systematic reviews of the comparative effectiveness of such devices as part of health technology assessments (HTAs). Based on this review, the authors make ten recommendations, including a template for a logic model for TMDs that summarizes the factors that should be systematically considered. From their analysis they conclude that, in planning and conducting a systematic review, more effort is required to define the intervention and its technical characteristics, and to identify and describe effect-modifying factors such as the expertise and learning of users and providers of medical devices. The authors also argue that the quality of primary studies in this field must be substantially improved. In a second article, Schnell-Inderst et al. follow up on the latter conclusion by reviewing existing recommendations on study design, conduct, analysis, and reporting for primary studies of TMDs and interventional procedures. Their analysis shows that relevant contextual factors for TMD interventions should be considered in the selection of patients, providers, and centers, and in data collection and analysis. The authors also identified guidance on the analysis and quantification of learning curves. They conclude that better dissemination of the methodology for conducting primary research on TMDs is needed to support improvement of the evidence base for HTAs. In addition, they recommend that HTA bodies have early dialogues with manufacturers to improve the quality of primary studies, and they emphasize the need for incentives that take regulatory requirements and market conditions into account.
The work by Schnell-Inderst’s group underlines that, in comparative effectiveness research, contextual and effect-modifying factors, including the expertise and learning of users and providers as well as providers’ and patients’ preferences, should be appropriately addressed in order to achieve both internally and externally valid study results that optimally contribute to clinical practice. This can also help to optimize the compared interventions and strategies, in order to ensure a fair competition and a valid outcome.
The work of Siebenhofer et al. and Ladanie et al. also emphasizes the need for sufficiently comprehensive comparative effectiveness research. Siebenhofer's group demonstrates the importance of including routine care as a comparator: in their general practice-based review of cluster randomized trials of complex interventions, few studies showed superiority over routine care. Ladanie’s team found in a systematic review that approved drug treatments do not consistently perform better in trials than off-label treatments; off-label options should therefore be considered for inclusion in comparative effectiveness research. All interventions, including off-label treatments, must be evaluated with the highest degree of scientific rigor.
The added value of comprehensive comparative effectiveness research for further research and clinical application can only be fully harvested through reproducible research practices. In this connection, we recommend the review by Page et al., who show that reproducible research practices are underused in systematic reviews of biomedical interventions. They call for better strategies to facilitate appropriate data description, transparent reporting of methods and results, and sharing of data sets.
Summarising the above, taking the principle of optimal and fair competition seriously may have consequences for the requirements that clinical research teams must meet in order to mobilize optimal expertise for all relevant interventions to be compared. In addition, this principle should prompt reconsideration of the appropriateness of having trials funded by one manufacturer when products of other companies are also part of the comparison. Given the need for well-balanced objectivity and impartiality of the study and for optimizing validity, involvement of all relevant manufacturers would be better. Fair and valid competition can be further promoted by ensuring that trials are led by independent academic parties with a solid publicly financed basis. This would follow the recommendation of Flacco et al. that consideration be given to allowing trials of comparative effectiveness and safety to be conducted under the control of nonprofit entities. Regulatory agencies can play an important role in formulating the corresponding formal requirements to achieve this [3]. Finally, as underpinned by the work of Schnell-Inderst et al., substantial involvement of the perspectives of clinical practitioners who focus solely on patients’ interests, and of patients themselves, is paramount.

      References

1. Sox HC, Helfand M, Grimshaw J, Dickersin K, PLoS Medicine Editors. Comparative effectiveness research: challenges for medical journals. J Clin Epidemiol. 2010;63:862-864.
2. Gartlehner G, Fleg A. Comparative effectiveness reviews and the impact of funding bias. J Clin Epidemiol. 2010;63:589-590.
3. Flacco ME, Manzoli L, Boccia S, Capasso L, Aleksovska K, Rosso A, et al. Head-to-head randomized trials are mostly industry sponsored and almost always favor the industry sponsor. J Clin Epidemiol. 2015;68:811-820.
4. Report FIGON - Dutch Medicines Days 2010, Ethical aspects in Clinical Research. Fiagnostiek. 2010;3:2-4.
5. Knottnerus A, Govaert P, Dinant G-J, Thijs C. GeBu te kort door de bocht over griepprik [in Dutch]. Medisch Contact. 2011;66:2962-2964.