A high-value, patient-centered healthcare system that is equitable and efficient, and that produces uniformly high health outcomes, requires good evidence. Comparative effectiveness research (CER) provides a key piece of that evidence by comparing the effectiveness of alternative medical treatments and thereby helping providers, payers and patients determine which courses of treatment are best.
Alarmingly, the majority of our care is not based on CER. In 2009, the Institute of Medicine estimated that more than half of the treatments delivered do not have clear evidence of effectiveness.1 Similarly, Clinical Evidence, a project of the British Medical Journal, found that little was known about the effectiveness of nearly 50 percent of 3,000 medical treatments that had been the subject of randomized controlled trials (RCTs).2
Increased funding for CER, along with getting study results into practice, could lead to more effective use of our healthcare dollars. More clarity about which treatments work best, and for which types of patients, could make it possible to shift money toward those interventions and away from less effective treatments. But better funding for CER is not enough on its own: increasing both the adoption of CER in clinical practice and the dissemination of results is critical to reducing waste, slowing spending growth and improving outcomes.
Comparative effectiveness research answers questions about the relative effectiveness of alternative medical treatments and can take many forms, including head-to-head randomized trials, systematic reviews that synthesize existing studies and observational analyses of data collected outside of trials.
In addition to comparative effectiveness data obtained from trials, researchers are also using real-world evidence (RWE) to compare the effects of different interventions. This data is generally collected in electronic health records (EHRs) and can offer insights into how treatments perform in different patients. For example, one such study on obesity used RWE alongside trial data to compare gastric bypass, sleeve gastrectomy and adjustable gastric band procedures.3
RWE can be a valuable complement to RCTs, especially because patients enrolled in RCTs are usually healthier and younger than the general patient population. RWE analyses using patient registries and administrative data can also be performed at lower cost than collecting new data in clinical trial settings.4 Additionally, socioeconomically disadvantaged patients and racial and ethnic minorities are underrepresented in RCTs, whereas data collected from EHRs represents a more diverse set of patients.
Experts believe using RWE may help close the “efficacy-effectiveness gap,” the difference between an intervention’s effects in RCTs and in real-world practice.5 However, hurdles to using RWE in CER remain. Though EHR adoption has increased, a lack of interoperability (the ability to exchange data with and use data from other systems) prevents patient data from flowing across research and care settings.6
Our health system is steadily moving to incentivize quality and improve equity. Lack of clear treatment evidence undermines our ability to improve how our health system works, contributing to treatment variation, waste and wide disparities in costs and outcomes across the country. Reliable CER is foundational to key activities designed to make our health system work better, such as patient shared decision-making,7 pay-for-performance provider payment8 and value-based insurance design.9
In 2008, the U.S. devoted just 1 percent of healthcare spending to learning what works best, for whom and under what circumstances.10 By way of comparison, 10 percent of healthcare spending may be spent on overtreatment and low-value care.11
For nearly two decades, spending on CER has been increasing, but not enough to rein in wasteful healthcare spending. In 2003, the Medicare Prescription Drug, Improvement, and Modernization Act expanded the Agency for Healthcare Research and Quality’s (AHRQ’s) responsibility to conduct CER by creating the Effective Health Care Program.12 In 2009, the American Recovery and Reinvestment Act (ARRA) provided $1.1 billion to fund CER,13 and later that year the Institute of Medicine released a major report detailing national priorities for CER.

In 2010, as part of the Affordable Care Act (ACA), Congress authorized the Patient-Centered Outcomes Research Institute (PCORI) to fund CER that engages patients and other stakeholders throughout the research process. According to the authorizing legislation, “The purpose of the Institute is to assist patients, clinicians, purchasers, and policy-makers in making informed health decisions by advancing the quality and relevance of evidence concerning the manner in which diseases, disorders, and other health conditions can effectively and appropriately be prevented, diagnosed, treated, monitored, and managed through research and evidence synthesis.” As of 2020, PCORI had invested nearly $2.6 billion in more than 700 patient-centered CER studies14 (see box below). Though the U.S. has increased its investment in CER, research alone is not enough to mitigate waste, limit cost growth and improve outcomes.
Example: One PCORI-funded study on diabetes treatments found no statistically significant differences in blood sugar control or health-related quality of life between patients with noninsulin-treated type 2 diabetes who performed self-monitoring and those who did not.15 Over five years, discontinuing self-monitoring in this population would save more than $12 billion in healthcare costs.16 However, these savings depend on all eligible patients not testing their blood sugar daily.
Undertaking CER alone does not necessarily save money or lead to better outcomes. Even with an evidence base in place, it can sometimes take upwards of 17 years to get study results into practice17 (see box below). Studies have shown that disseminating results via clinical practice guidelines led to initial increases in the utilization of effective therapies. However, more effective treatments did not replace less effective ones as the standard of care, potentially indicating a need for financial and non-financial provider incentives.18
Even simple protocols, like requiring physicians to justify medical necessity or creating checklists that remind providers to prescribe certain medications, can have a significant impact on outcomes. After CER studies demonstrated the effectiveness of a specific type of heart medication, Intermountain Healthcare implemented a checklist recommending that physicians provide it. This simple protocol reduced deaths from congestive heart failure by 23 percent and saved $3.5 million a year.20
Example: One RCT comparing diuretics, angiotensin-converting enzyme (ACE) inhibitors, calcium channel blockers and alpha blockers for the treatment of hypertension found that diuretics were more effective than the alternative treatments, in addition to being less expensive. Unfortunately, these findings had very little effect on prescribing patterns.19
Financial Incentives
Decisions about coverage, benefit design and provider payment can influence the pace of adoption. Part of the answer may lie in changing the way we pay for care (see box below). For example, payers could offer bonus payments to providers who deliver clinically effective treatments.21 Other strategies rely on coverage determinations, such as using step therapy to encourage the use of certain therapies over others, or value-based insurance designs that limit coverage or increase cost sharing for therapies that have not demonstrated clinical benefit. However, critics warn that these strategies could be seen as limiting access to care.22
In addition to changing the scope of covered treatments, payers can use CER findings to change reimbursement policies. For example, Medicare could pay usual rates for treatments that demonstrate superior clinical effectiveness, while treatments with insufficient evidence could be paid via dynamic pricing. Under that approach, payments would initially be set according to current cost-plus reimbursement formulas, which involve predetermined margins, and would be reassessed after three years. If a treatment was still unable to demonstrate a clinical advantage, its payment would be lowered to the Medicare reimbursement rate for a relevant alternative option.23 Reimbursement levels have a powerful effect on practice patterns: after Medicare set higher reimbursement rates for intensity-modulated radiation therapy than for conventional three-dimensional therapy, providers around the country abandoned the three-dimensional approach.24
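To make the mechanics of dynamic pricing concrete, the sketch below expresses the decision rule described above as a small Python function. It is an illustrative simplification rather than the cited proposal itself, and the function name and payment figures are hypothetical.

```python
# Minimal illustrative sketch (not from the cited proposal) of the
# dynamic-pricing logic described above; all payment figures are hypothetical.

def dynamic_price(has_superior_evidence: bool,
                  years_since_launch: int,
                  cost_plus_rate: float,
                  alternative_rate: float) -> float:
    """Return a payment rate under a simplified dynamic-pricing rule."""
    if has_superior_evidence:
        # Superior clinical effectiveness demonstrated: keep usual pricing.
        return cost_plus_rate
    if years_since_launch < 3:
        # Evidence still pending: interim cost-plus pricing before reassessment.
        return cost_plus_rate
    # No clinical advantage shown at reassessment: pay the rate of a
    # relevant alternative treatment instead.
    return alternative_rate


# Hypothetical example: a $1,200 therapy competing with a $700 alternative.
print(dynamic_price(False, 1, 1200.0, 700.0))  # 1200.0 during the three-year window
print(dynamic_price(False, 4, 1200.0, 700.0))  # 700.0 after failing reassessment
```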
Example: Physician engagement and financial incentives led to the elimination of early elective birth inductions (before week 39 of a pregnancy) after the American College of Obstetricians and Gynecologists found that early inductions lead to poor outcomes, including increases in neonatal intensive care unit (NICU) admissions and ventilator usage. Intermountain worked with SelectHealth to stop paying for non-medically indicated inductions prior to 39 weeks, and clinical leaders held meetings to garner support for the goal of eliminating all early elective inductions. SelectHealth also created a program for new mothers focused on providing prenatal help and education.25 As a result, early elective inductions dropped from 28 percent of all elective inductions to zero, resulting in shorter labors, fewer C-sections and cost savings of $2.5 million a year.
Non-Financial Incentives
Non-financial incentives may also drive adoption of CER results. These approaches include peer comparisons, peer recognition, eliminating barriers and providing institutional support and leadership.26 Educating providers and medical students is also key to getting CER results into practice.
CER can highlight services that might be better for certain patients, leading to a more personalized approach. However, the treatment decision is not the provider’s alone to make. Patient shared decision-making (PSDM) is a process that goes beyond traditional informed consent: it is an interpersonal, interdependent process in which healthcare providers and patients collaborate to make decisions about the care that patients receive. Shared decision-making reflects not only medical evidence and providers’ clinical expertise, but also the unique preferences and values of patients and their families. There is strong evidence that PSDM improves outcomes and increases patient and physician satisfaction, suggesting it should become the standard of care.27 However, for patients to be involved in their care decisions, there must be evidence comparing the effectiveness of treatments and data on how a specific patient might respond to different treatments.28
Effective take-up of CER findings can go beyond provider- and patient-focused efforts. Research from the Alliance of Community Health Plans (ACHP) found that collaboration between health plans, physicians and communities sped up the adoption of evidence-based care, and ACHP has highlighted best practices to accelerate this uptake.29
Insufficient investments in comparative effectiveness research undermine our nation’s efforts to produce better value and more equitable outcomes from our healthcare system. Evidence about which treatments work best, and for which types of patients, provides the foundation for our value-based provider payment efforts, patient shared decision-making, quality measurement and much more.
Increased, targeted investments in CER are essential to achieving a high-value, patient-centered healthcare system. It is likely these investments will “pay us back” in terms of future savings and better outcomes. However, CER’s impact also depends, in large part, on getting the results into provider treatment and prescribing practices. Fortunately, research highlighting strategies that lead to the effective dissemination of CER results can guide the way. A variety of financial and non-financial incentives can be used to influence provider behavior and promote the adoption of evidence-based care.
1. National Academies of Sciences, Engineering, and Medicine, Initial National Priorities for Comparative Effectiveness Research, Washington, D.C. (2009).
2. Kliff, Sarah, “Surprise! We Don't Know if Half our Medical Treatments Work,” Washington Post (Jan. 24, 2013).
3. McTigue, Kathleen M., et al., “Comparing the 5-Year Diabetes Outcomes of Sleeve Gastrectomy and Gastric Bypass: The National Patient-Centered Clinical Research Network (PCORNet) Bariatric Study,” JAMA Surgery, Vol. 155, No. 5 (March 2020).
4. Katkade, Vaibhav B., Kafi N. Sanders, and Kelly H. Zou, “Real World Data: an Opportunity to Supplement Existing Evidence for the Use of Long-Established Medicines in Health Care Decision Making,” Journal of Multidisciplinary Healthcare, Vol. 11 (July 2018).
5. Blumenthal, Daniel M., et al., “Real-World Evidence Complements Randomized Controlled Trials in Clinical Decision Making,” Health Affairs Blog (Sept. 27, 2017).
6. Becker's Hospital Review, ONC to Congress: EHR Adoption is High, But Barriers to Interoperability Remain, (Accessed on June 3, 2020).
7. Staren, Dakota, and Sunita Krishnan, The Consumer Benefits of Patient Shared Decision Making, Healthcare Value Hub, Washington, D.C. (May 2019).
8. Healthcare Value Hub, Pay for Performance (P4P), (Accessed on June 3, 2020).
9. Healthcare Value Hub, Value-Based Insurance Design, (Accessed on June 3, 2020).
10. National Academies of Sciences, Engineering, and Medicine (2009).
11. Berwick, Donald M., and Andrew D. Hackbarth, “Eliminating Waste in U.S. Health Care,” JAMA, Vol. 307, No. 14 (April 11, 2012); Shrank, William H., Teresa L. Rogstad, and Natasha Parekh, “Waste in the U.S. Health Care System: Estimated Costs and Potential for Savings,” JAMA, Vol. 322, No. 15 (Oct. 7, 2019).
12. Price-Haywood, Eboni G., “Clinical Comparative Effectiveness Research Through the Lens of Healthcare Decisionmakers,” Ochsner Journal, Vol. 15, No. 2 (June 2015).
13. Harrington, Scott E., and Alan B. Miller, Incentivizing Comparative Effectiveness Research, Ewing Marion Kauffman Foundation, Kansas City, MO (Jan. 15, 2011).
14. Patient-Centered Outcomes Research Institute, PCORI Board Approves New $150 Million Initiative to Fund Large-Scale Patient-Centered Clinical Studies. Press release (March 2, 2020).
15. Young, Laura A., et al., “Glucose Self-Monitoring in Non-Insulin-Treated Patients With Type 2 Diabetes in Primary Care Settings: A Randomized Trial,” JAMA Internal Medicine, Vol. 177, No. 7 (July 2017).
16. Patient-Centered Outcomes Research Institute, Addressing Type 2 Diabetes, (Accessed on June 3, 2020).
17. Alliance of Community Health Plans, Fact Sheet: Accelerating Adoption of Evidence-Based Care: Payer-Provider Partnerships, Washington, D.C. (2018).
18. Gibson, Teresa B., et al., “Real-World Impact of Comparative Effectiveness Research Findings on Clinical Practice,” American Journal of Managed Care, Vol. 20, No. 6 (June 2014).
19. Hussey, Peter S., Increase the Use of Comparative Effectiveness, RAND Corporation, Santa Monica, CA (2009).
20. Bernstein, Jeffrey, The Facts About Comparative Effectiveness Research: How Studying Which Treatments Work Can Improve Care and Reduce Costs, U.S. PIRG Education Fund, Denver, CO (July 2009).
21. Hussey (2009).
22. Ibid.
23. Pearson, Steven D., and Peter B. Bach, “How Medicare Could Use Comparative Effectiveness Research in Deciding on New Coverage and Reimbursement,” Health Affairs, Vol. 29, No. 10 (October 2010).
24. Ibid.
25. Alliance of Community Health Plans, Eliminate Inappropriate Early Inductions: SelectHealth: Salt Lake City, UT, (Accessed on June 3, 2020).
26. More information on these types of incentives can be found in our research brief, Hunt, Amanda, Non-Financial Provider Incentives: Looking Beyond Provider Payment Reform, Healthcare Value Hub, Washington, D.C. (February 2018).
27. Staren (May 2019).
28. For example, see this study comparing Roux-en-Y gastric bypass, sleeve gastrectomy and adjustable gastric banding procedures. Though sleeve gastrectomy has become the standard of care, prior to the study, no long-term data comparing gastric bypass and sleeve gastrectomy existed. Researchers found that gastric bypass surgery performed much better on diabetes relapse rates, even though both procedures performed similarly on diabetes remission rates. Arterburn, David, et al., “Comparative Effectiveness and Safety of Bariatric Procedures for Weight Loss: A PCORnet Cohort Study,” Annals of Internal Medicine, Vol. 169, No. 11 (Dec. 4, 2018).
29. Alliance of Community Health Plans (2018).