Improving Value

Reporting Medical Harm

A critically important area of excess spending and patient suffering is connected to care that is not only unnecessary but actually harms patients. Among other things, medical harm includes:

  • serious reportable events—more commonly called “never events;”1
  • healthcare-acquired conditions (HACs);
  • healthcare-acquired infections (HAIs);
  • medication errors; and
  • diagnostic errors.

The strategies that help reduce patient harm are fairly well understood but unevenly implemented. Medical harm reporting is widely touted as a best practice for driving quality improvement and, ultimately, reducing the frequency of medical harm events, although it remains one policy tool among many.2

What is Medical Harm Reporting?

Public reporting, in general, is seen as an important quality improvement strategy. Medical harm reporting, or patient safety reporting, involves the detailed disclosure of patient safety events by the healthcare personnel associated with the event.3 The disclosure of performance data can lead to healthcare quality improvement through three major pathways.4

  • The change pathway: the use of evidence-based performance measures identifies specific deficiencies in healthcare quality, allowing for improvement in clinical outcomes.5 
  • The reputation pathway: after public disclosure, poor-performing providers are revealed and their reputations are negatively impacted. Concern about public image, through this pathway, incentivizes providers to improve the quality of their healthcare services.6 Providing peer comparisons to providers appears to show some promise in reducing harm and improving performance.7,8
  • The selection pathway: consumers choose providers by assessing providers’ performance ratings on the outcomes that matter to them. Providers are therefore incentivized to improve their performance to attract more healthcare consumers. This pathway relies on consumer behavior.

Reporting medical harm typically starts with incident reports, which are submitted by providers involved in the event. These reports can be used at the local hospital level or submitted to nationwide databases. While today’s reporting systems largely pertain to events that have occurred in hospitals,9 reports can stem from any healthcare setting in which a patient has been treated.10 These systems are usually confidential and offer legal protection to reporters unless a crime or act of misconduct has occurred.11

Because events are voluntarily reported, incident reporting systems are subject to selection bias. When compared to medical record review and direct observation, these systems capture only a portion of medical harm events and may not reliably document serious events.13,14

Incident reports can offer insights into medical harm by providing information on the frequency of events that resulted in harm. However, they do not provide information about the number of patients who were exposed to such an event but did not experience the anticipated negative health outcome, or “near misses.”15

Incident reports are ideally followed by a root cause analysis (RCA) to explore the underlying causes of performance issues and prevent the medical harm event from recurring.16 However, RCAs are not fulfilling their theoretical potential: medical harm events often recur after an RCA has been conducted. Recommendations to strengthen the impact of RCAs include prioritizing the problems identified in RCAs, measuring the effectiveness of corrective actions and actually implementing RCA recommendations.17

Required and Voluntary Reporting

Government regulatory agencies and accreditation and certification bodies set reporting requirements for providers; depending on the entity requesting the data, reporting may be voluntary or mandatory.

The Centers for Medicare and Medicaid Services (CMS) has reporting requirements for entities such as acute care hospitals, long-term acute care hospitals, inpatient rehabilitation facilities and outpatient dialysis facilities.18 CMS offers financial incentives to providers that successfully disclose medical harm events through its Quality Payment Program.19

CMS makes selected hospital information public, including the frequency of hospital-acquired infections such as central line-associated bloodstream infections (CLABSI), catheter-associated urinary tract infections (CAUTI) and Clostridium difficile infections, through Hospital Compare.20 Hospitals are compared to the national average and designated as no different, better or worse than the national benchmark.21
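
To make the benchmark designation concrete, below is a minimal sketch of how a “better/no different/worse” label could be assigned from a hospital’s observed and predicted infection counts. The standardized infection ratio, the approximate confidence interval and the 1.0 cutoff are illustrative assumptions; CMS’s actual statistical methodology is more involved.

```python
import math

def classify_vs_benchmark(observed_infections: int, predicted_infections: float) -> str:
    """Assign a benchmark designation from observed vs. predicted infection counts.

    Illustrative sketch only: uses a standardized infection ratio
    (SIR = observed / predicted) and an approximate 95% interval based on a
    Poisson assumption. CMS's actual statistical method and thresholds differ.
    """
    sir = observed_infections / predicted_infections
    # Rough standard error for the SIR, treating the observed count as Poisson.
    se = math.sqrt(observed_infections) / predicted_infections
    lower, upper = sir - 1.96 * se, sir + 1.96 * se

    if upper < 1.0:
        return "better than the national benchmark"
    if lower > 1.0:
        return "worse than the national benchmark"
    return "no different than the national benchmark"

# Example: 4 observed CLABSI events against 6.8 predicted from national baseline data.
print(classify_vs_benchmark(4, 6.8))  # "no different than the national benchmark"
```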

CMS has received criticism about the reliability of its medical harm measures and quality ratings. For instance, ambulatory surgery centers are required to report different measures than nursing homes and hospitals, making it difficult to compare different types of providers.22

The National Practitioner Data Bank (NPDB) collects information on adverse actions against practitioners, medical malpractice payments and other problematic behaviors. NPDB information is confidential and not publicly accessible; only federally authorized eligible entities can report to or request information from the NPDB.23 For instance, medical malpractice payers must report their payments to the NPDB, while hospitals, state medical and dental boards and other authorized groups can query the NPDB to access this information.24

State Requirements

In light of gaps in the federal reporting requirements, it is recommended that states build on the CMS requirements with their own requirements to foster a more local, multi-stakeholder, systems-level approach to reducing medical harm. At present, many states require medical harm reporting, but state requirements vary greatly.25 Three states have voluntary reporting programs that are not regulated by government agencies. Georgia’s and West Virginia’s voluntary reporting programs are privately supported and receive no state funding, though they have received federal grant funding. Oregon’s voluntary reporting program operates as a semi-independent state agency, created through state legislation and funded primarily through private sources.

Independent Commissions

The Joint Commission, an independent not-for-profit organization, evaluates hospitals and other healthcare organizations that have voluntarily decided to seek accreditation or certification. Its assessments focus primarily on patient safety and quality of care, including staff members’ ability to manage patient medications and prevent medical errors, as well as the provider’s capacity to improve based on performance data. The Joint Commission makes its accreditation decisions publicly accessible through its Quality Check website.26

The Leapfrog Group collects data through its voluntary27 Leapfrog Ambulatory Surgery Center (ASC) survey, which contains information about patient safety.28 The results are posted for public access in Leapfrog’s Compare Hospitals database. Leapfrog also uses data from its survey, CMS and other sources to create safety grades for acute care hospitals that are publicly accessible through the Leapfrog Hospital Safety Grade website.29 The safety grades, displayed to the public as A, B, C, D or F, are computed from measures that include the frequency of preventable errors and infections.30
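
As a rough illustration of how a letter grade can be derived from a set of patient safety measures, the sketch below combines hypothetical standardized measure scores with weights and maps the composite to A through F. The measure names, weights and grade cut points are assumptions for illustration only, not Leapfrog’s actual scoring methodology.

```python
# Hypothetical composite-scoring sketch; the measures, weights and grade
# cut points below are illustrative assumptions, not Leapfrog's methodology.
MEASURE_WEIGHTS = {
    "clabsi": 0.30,            # central line-associated bloodstream infections
    "cauti": 0.25,             # catheter-associated urinary tract infections
    "pressure_ulcers": 0.20,
    "medication_errors": 0.25,
}

GRADE_CUTOFFS = [(0.85, "A"), (0.70, "B"), (0.55, "C"), (0.40, "D")]

def safety_grade(scores: dict[str, float]) -> str:
    """Map standardized measure scores (0 = worst, 1 = best) to a letter grade."""
    composite = sum(MEASURE_WEIGHTS[m] * scores[m] for m in MEASURE_WEIGHTS)
    for cutoff, grade in GRADE_CUTOFFS:
        if composite >= cutoff:
            return grade
    return "F"

# Example: a hospital scoring well on infections but poorly on medication errors.
print(safety_grade({"clabsi": 0.9, "cauti": 0.8,
                    "pressure_ulcers": 0.7, "medication_errors": 0.6}))  # "B"
```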

Medical Harm Reporting Concerns

While medical harm reporting, especially when publicly accessible, is considered effective at incentivizing hospitals and other providers to improve performance,31,32 there are several concerns about how effective current reporting standards and measures are in practice. These include concerns about underreporting,33 low consumer use of patient safety data and unintended consequences, including the possibility that providers will choose not to treat high-risk patients out of concern for the hospital’s safety rating.34

Underreporting

There is widespread concern that medical harm reporting contains errors, suffers from serious underreporting and does not reliably predict patient outcomes.35 A 2006 study of the barriers to incident reporting found that these barriers include being unaware that an incident was relevant to the reporting system, fear of disciplinary action, lack of support from coworkers and concern that the system is not truly confidential. Limitations of reporting systems can also be related to the healthcare setting in which the incident occurred. The most widely agreed-upon barrier to reporting was that providers did not receive feedback after reports were recorded.36 As a result, it has been suggested that incident reporting systems be used to identify and prioritize patient safety issues rather than to create valid rates or monitor changes in the frequency of medical harm events.37

Moreover, a U.S. Department of Health and Human Services report found that providers did not report 86 percent of patient safety events that occurred, partly because staff were unsure of what constituted a medical harm event.38

Collection Efforts Too Narrowly Focused

Current reporting efforts focus on just a narrow subset of all the types of medical harm. Diagnostic errors, including delayed and missed diagnoses, are typically not reported.39 Moreover, CMS has different reporting requirements for different entities, including ambulatory surgery centers, nursing homes and hospitals.40 For instance, CMS requires that both acute care hospitals (ACHs) and inpatient rehabilitation facilities (IRFs) report CAUTI and Clostridium difficile infections (CDI), but only requires ACHs to report CLABSI and methicillin-resistant Staphylococcus aureus (MRSA) infections.41,42

Low Consumer Use of Patient Safety Data

There are also concerns about how frequently consumers use patient safety data. Public access to and use of patient safety data are low, especially among vulnerable populations. Moreover, there is a lack of standardization of measures and definitions across hospital reporting datasets, and data is often presented in a way that is confusing or overwhelming to consumers.43 If consumers make greater use of this data and base their healthcare decisions on it, hospitals and other healthcare providers will, in turn, be incentivized to improve their performance ratings.44

Unintended Consequences

There are concerns about the unintended consequences of patient safety reporting, including the potential for reporting requirements to affect providers’ selection of patients. For instance, providers may choose not to serve high-risk populations out of concern that a greater risk of negative outcomes could hurt their performance ratings. While risk adjustment techniques can be used to take the patient population into consideration, these may not alleviate providers’ concerns.45
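
For readers unfamiliar with risk adjustment, the sketch below shows one simple form of it: comparing the complications a hospital actually observed against the number expected given each patient’s predicted risk. The observed-to-expected ratio and the illustrative probabilities are simplifying assumptions; real risk models are estimated from detailed patient characteristics.

```python
def observed_to_expected_ratio(outcomes: list[int], expected_probs: list[float]) -> float:
    """Observed-to-expected (O/E) ratio: values near 1.0 mean outcomes roughly match
    what the patient mix predicts; below 1.0 suggests better-than-expected performance.
    Simplified illustration; real risk models estimate expected_probs from detailed
    patient characteristics."""
    observed = sum(outcomes)        # 1 = negative outcome occurred, 0 = it did not
    expected = sum(expected_probs)  # summed model-predicted probabilities per patient
    return observed / expected

# A hospital serving higher-risk patients has larger expected_probs, so the same
# number of observed complications yields a lower (better) O/E ratio.
print(observed_to_expected_ratio([1, 0, 1, 0, 0], [0.4, 0.3, 0.5, 0.2, 0.3]))  # ≈ 1.18
```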

Recommendations

Despite concerns about medical harm reporting, researchers find that these reports can accurately identify performance issues and areas for improvement. For example, a 2019 report prepared for The Leapfrog Group by the Johns Hopkins Armstrong Institute for Patient Safety and Quality found that patients treated at hospitals with “D” and “F” safety grades had a 92 percent greater risk of avoidable death than patients treated at hospitals with “A” safety grades.46 Nonetheless, policymakers and practitioners should take steps to improve medical harm reporting as a key component of an overall strategy to address medical harm.

Improve and Standardize Reporting Guidelines and Measures

The lack of standardized measures for reports is a common critique of the current patient safety reporting process.47 CMS measures different patient safety indicators for different providers48 and has been accused of using unsuitable indicators to evaluate patient safety events.49 

Moreover, hospital ratings are inconsistent across different sources. The Leapfrog Group’s hospital safety grades often vary greatly from CMS’ ratings. These discrepancies make it difficult for consumers to determine which hospitals are high performing.50 Equally important, unclear ratings of overall performance can falsely indicate that a provider does not have major patient safety issues, removing any incentive for the provider to identify problems and improve upon its practices.

Encourage Reporting by Establishing Trust

For the personnel involved in a medical harm event, it is important to know that reporting the event is encouraged and will not be used to punish the providers. Respondents to the U.S. Agency for Healthcare Research and Quality (AHRQ) patient safety surveys indicated that, after a medical harm event was reported, the focus seemed to fall on the personnel who participated in the event rather than on using the report to identify issues with the process or other factors that could prevent subsequent events. There is broad agreement that the goal of reporting is not to “shame and blame” but to work across stakeholders to identify patterns and craft data-driven interventions that prevent future harm. Errors leading to preventable harm are almost always multifactorial.51

 

Notes

1. These errors are defined as “adverse events that are serious, largely preventable.” A list of “never events” is maintained by the National Quality Forum.

2. Anderson, James G., and Kathleen Abrahamson, “Your Health Care May Kill You: Medical Errors,” Studies in Health Technology and Informatics, Vol. 234: 13-17 (2017). 

3. Patient Safety Network, "Reporting Patient Safety Events" (September 2019).

4. Campanella, Paolo, et al., “The Impact of Public Reporting on Clinical Outcomes: A Systematic Review and Meta-Analysis,” BMC Health Services Research, Vol. 16, No. 296 (July 2016). 

5. Berwick, Donald M., Brent James, and Molly Joel Coye, “Connections Between Quality Measurement and Improvement,” Medical Care, Vol. 41, No. 1 (2003). 

6. Campanella, Paolo, et al., “The Impact of Public Reporting on Clinical Outcomes: A Systematic Review and Meta-Analysis,” BMC Health Services Research, Vol. 16, No. 296 (July 2016). 

7. The Commonwealth Fund, "Transparency and Public Reporting Are Essential for a Safe Health Care System" (March 2010).

8. Bornstein, David, "Hospitals Focus on Doing No Harm," The New York Times (February 2016).

9. U.S. Department of Health and Human Services, Agency for Healthcare Research and Quality, "Reporting Patient Safety Events" (September 2019).

10. Inglesby, Tom, and Leslie Proctor, "Incident Reporting Systems," Patient Safety & Quality Healthcare (October 2014).

11. Patient Safety Network, "Reporting Patient Safety Events" (September 2019).

12. Ibid.

13. U.S. Department of Health and Human Services: Agency for Healthcare Research and Quality, "Reporting Patient Safety Events," (September 2019).

14. Levinson, Daniel R., "Hospital Incident Reporting Systems Do Not Capture Most Patient Harm," U.S. Department of Health and Human Services: Office of the Inspector General (January 2012).

15. U.S. Department of Health and Human Services: Agency for Healthcare Research and Quality, "Reporting Patient Safety Events," (September 2019).

16. UNC Institute for Healthcare Quality Improvement, "Root Cause Analysis."

17. National Patient Safety Foundation, "RCA2: Improving Root Cause Analyses and Actions to Prevent Harm," Boston, MA (January 2016).

18. Centers for Disease Control and Prevention, "CMS Requirements." 

19. Centers for Medicare and Medicaid Services Quality Payment Program, "MIPS Overview." 

20. Medicare.gov, "Measures and Current Data Collection Periods."

21. Medicare.gov, "Infections."

22. Castellucci, Maria, “Getting quality measures in alignment a difficult task,” Modern Healthcare (June 2019).

23. National Practitioner Data Bank, "What You Must Report to the NPDB." 

24. National Practitioner Data Bank, "NPDB Reporting Requirements and Query Access." 

25. The primary source, augmented by additional Healthcare Value Hub review, was Centers for Disease Control and Prevention, 2018 National and State Healthcare Associated Infections Progress Report (November 2019), https://www.cdc.gov/hai/data/portal/progressreport.html#Data_tables. Also, QuPS.org, "Comparison of US Programs: Adverse Events / Physician Reporting by States."

26. The Joint Commission, "Joint Commission FAQ Page."

27. The Leapfrog Group, "Health Care Ratings and Reports."

28. The Leapfrog Group, "Leapfrog ASC Survey Content."

29. The Leapfrog Group,"How Results Are Used." 

30. Leapfrog Hospital Safety Grade, "Frequently Asked Questions About the Leapfrog Hospital Safety Grade."

31. Berwick, Donald M., Brent James, and Molly Joel Coye, “Connections Between Quality Measurement and Improvement,” Medical Care, Vol. 41, No. 1 (2003).

32. Campanella, Paolo, et al., “The Impact of Public Reporting on Clinical Outcomes: A Systematic Review and Meta-Analysis,” BMC Health Services Research, Vol. 16, No. 296 (July 2016).

33. Evans, Sue M., et al., “Attitudes and Barriers to Incident Reporting: A Collaborative Hospital Study,” Quality and Safety in Healthcare, Vol. 15, No. 1 (February 2006).

34. James, Julia, “Public Reporting on Quality and Costs,” Health Affairs (March 2012).

35. Johns Hopkins Medicine, "Public Reporting Measures Fail to Describe the True Safety of Hospitals" (May 2016).

36. Evans, Sue M., et al., “Attitudes and Barriers to Incident Reporting: A Collaborative Hospital Study,” Quality and Safety in Healthcare, Vol. 15, No. 1 (February 2006).

37. Pronovost, Peter J., et al., “Improving the Value of Patient Safety Reporting Systems,” Advances in Patient Safety: New Directions and Alternative Approaches, Vol. 1 (August 2008).

38. Levinson, Daniel R., "Hospital Incident Reporting Systems Do Not Capture Most Patient Harm," U.S. Department of Health and Human Services: Office of the Inspector General (January 2012).

39. Allen, Marshall, "We’re Still Not Tracking Patient Harm," ProPublica (July 2014).

40. Castellucci, Maria, “Getting quality measures in alignment a difficult task,” Modern Healthcare (June 2019).

41. Centers for Disease Control and Prevention, National Healthcare Safety Network (NHSN), "Acute Care Hospitals (ACH)," (December 2019).

42. Centers for Disease Control and Prevention, National Healthcare Safety Network (NHSN), "Inpatient Rehabilitation Facilities (IRF)," (December 2019).

43. James, Julia, “Public Reporting on Quality and Costs,” Health Affairs (March 2012).

44. Campanella, Paolo, et al., “The Impact of Public Reporting on Clinical Outcomes: A Systematic Review and Meta-Analysis,” BMC Health Services Research, Vol. 16, No. 296 (July 2016).

45. Ibid.

46. Morse, Susan, "Leapfrog report finds risk of death nearly doubles for patients in hospitals graded 'D' or 'F,'" Healthcare Finance (May 2019).

47. James, Julia, “Public Reporting on Quality and Costs,” Health Affairs (March 2012).

48. Castellucci, Maria, “Getting quality measures in alignment a difficult task,” Modern Healthcare (June 2019).

49. Brady, Michael, “Medicare quality measures need improvement, says government watchdog,” Modern Healthcare (September 2019).

50. O’Donnell, Jayne, "Deadly errors, infections: When hospital ratings don’t align what should patients believe?" USA Today (May 2019).

51. Bernazzani, Sophia, "Tallying the High Cost of Preventable Harm," Costs of Care (October 2015).