Quality and Safety in Health Care Journal

Correction: It's time for the field of geriatrics to invest in implementation science

Prusaczyk B, Burke RE. It’s time for the field of geriatrics to invest in implementation science. BMJ Qual Saf 2023;32:700-703.

In this article, the affiliation ‘Central and North West London NHS Foundation Trust’ has been added for Simon Conroy, and the funding statement has been updated to acknowledge funding from NHS Elect and Central and North West London NHS Foundation Trust.

doi: 10.1136/bmjqs-2023-016263corr1

Why tackling overuse will not succeed without changing our culture

Tackling overuse in healthcare is now more necessary than ever. Movements such as Choosing Wisely and Preventing Overdiagnosis have highlighted that some healthcare services offer no added value and may even cause harm to patients. Estimates of overdiagnosis and overtreatment vary widely between services, providers and regions.1 2 Overuse is a persistent challenge in high-income countries and is increasingly recognised in low-income settings.3 Action is needed to prevent patient harm, reduce resource waste and preserve the limited time of healthcare professionals. In addition, since healthcare services have a significant environmental impact, minimising overuse can contribute to achieving climate goals.

De-implementation science

To accelerate the reduction of overuse, robust de-implementation science is essential.4 This field studies the drivers, strategies and processes involved in reducing or eliminating ineffective, unnecessary or harmful healthcare practices, and in replacing them with evidence-based, high-value alternatives. Rigorous...

Unreasonable effectiveness of training AI models locally

Sepsis remains a leading cause of morbidity and mortality worldwide.1 The use of artificial intelligence (AI), and particularly machine-learning (ML) approaches, to predict which patients are at risk for sepsis in the hospital may improve patient-centred outcomes through early recognition and timely antibiotics. Yet, despite major interest in the use of ML applications in sepsis care, there are only a handful of successful examples of model implementation that save lives through early detection.2 3 The high cost and extensive system architecture required to test and implement novel ML applications have limited many institutions’ abilities to bring these models to the bedside. Unfortunately, this has resulted in a preponderance of studies on model development rather than implementation and reliance on proprietary models disseminated to health systems without validation or testing. One well-known sepsis predictive model developed by an electronic health record vendor (Epic Systems,...

Relative importance and interactions of factors influencing low-value care provision: a factorial survey experiment among Swedish primary care physicians

Background

Low-value care (LVC) describes practices that persist in healthcare despite being ineffective, inefficient or causing harm. Several determinants of the provision of LVC have been identified, but understanding how these factors influence professionals’ decisions, individually and jointly, is a necessary next step to guide de-implementation.

Methods

A factorial survey experiment was employed using vignettes that presented hypothetical medical scenarios to 593 Swedish primary care physicians. Each vignette varied systematically by factors such as patient age, patient request for the LVC, the physician’s perception of the practice, the cost of the practice to the primary care centre and the time taken to deliver it. For each scenario, we measured the reported likelihood of providing the LVC. We also collected information on each physician’s worry about missing a serious illness.
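
The factorial logic of such a vignette design can be made concrete with a short sketch. This is an illustrative reconstruction only: the factor names, levels and the clustered analysis noted in the comments are assumptions about how designs of this kind are typically built, not the authors' actual instrument or code.

```python
# Illustrative sketch of a full-factorial vignette design (assumed factor names/levels).
import itertools
import pandas as pd

factors = {
    "patient_age": ["younger", "older"],
    "patient_request": ["no", "yes"],
    "physician_view": ["negative", "positive"],
    "cost_to_centre": ["low", "high"],
    "time_to_deliver": ["short", "long"],
}

# Every combination of factor levels is a distinct vignette (2**5 = 32 here).
vignettes = pd.DataFrame(
    list(itertools.product(*factors.values())), columns=list(factors.keys())
)
print(len(vignettes), "vignettes")

# Hypothetical analysis step: each physician rates several randomly assigned vignettes
# on a 0-100 likelihood scale; a linear model with physician-clustered standard errors
# (e.g. statsmodels OLS with cov_type="cluster") then recovers percentage-point effects
# such as the ~14 pp increase associated with a patient request reported in the abstract.
```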

Results

Patient requests and physicians’ positive perceptions of the practice were the factors that increased the reported likelihood of providing LVC the most (by 14 and 13 percentage points (pp), respectively). When the LVC was low in cost or not time-consuming, patient requests further boosted the likelihood of provision by 29 and 18 pp, respectively. In contrast, credible evidence against the LVC reduced the role of patient requests by 11 pp. Physicians’ fear of missing a serious illness was linked with a higher reported probability of providing LVC, and the credibility of the evidence against the LVC reduced the role of this concern.

Conclusions

The findings highlight that patient requests enhance the role of many determinants, while the credibility of evidence diminishes the impact of others. Overall, these findings point to the relevance of increased clinician knowledge about LVC, tools for patient communication and the use of decision support tools to reduce the uncertainty in decision-making.

False hope of a single generalisable AI sepsis prediction model: bias and proposed mitigation strategies for improving performance based on a retrospective multisite cohort study

Objective

To identify bias in using a single machine learning (ML) sepsis prediction model across multiple hospitals and care locations; to evaluate the impact of six different bias mitigation strategies; and to propose a generic modelling approach for developing best-performing models.

Methods

We developed a baseline ML model to predict sepsis using retrospective data on patients in emergency departments (EDs) and wards across nine hospitals. We set model sensitivity at 70% and determined the number of alerts required to be evaluated for each case of true sepsis (number needed to evaluate (NNE), 95% CI) and the number of hours between the first alert and timestamped outcomes meeting Sepsis-3 reference criteria (HTS3). Six bias mitigation models were compared with the baseline model for their impact on NNE and HTS3.
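
As a rough illustration of the NNE metric, the sketch below picks the score threshold that captures 70% of true sepsis cases and then counts alerts per true case (the reciprocal of positive predictive value). The toy data, variable names and thresholding rule are assumptions for illustration, not the study's model or code.

```python
# Hedged sketch: computing "number needed to evaluate" (NNE) at a fixed 70% sensitivity.
import numpy as np

def nne_at_sensitivity(y_true, risk_score, target_sensitivity=0.70):
    """Return (threshold, NNE), where NNE = alerts per true sepsis case (1/PPV)."""
    y_true = np.asarray(y_true, dtype=bool)
    risk_score = np.asarray(risk_score, dtype=float)

    # Choose the highest threshold that still captures >= 70% of true sepsis cases.
    positives = np.sort(risk_score[y_true])[::-1]
    n_capture = int(np.ceil(target_sensitivity * positives.size))
    threshold = positives[n_capture - 1]

    alerts = risk_score >= threshold           # every alert must be evaluated
    true_alerts = alerts & y_true
    nne = alerts.sum() / true_alerts.sum()     # alerts per true sepsis case
    return threshold, nne

# Toy example only; the study reported mean NNEs of ~6.1 (EDs) and ~7.5 (wards).
rng = np.random.default_rng(0)
y = rng.random(10_000) < 0.02                            # ~2% sepsis prevalence
score = np.clip(rng.normal(0.2 + 0.3 * y, 0.15), 0, 1)   # higher scores for sepsis
thr, nne = nne_at_sensitivity(y, score)
print(f"threshold={thr:.3f}, NNE={nne:.1f}")
```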

Results

Across 969 292 admissions, mean NNE for the baseline model was significantly lower for EDs (6.1 patients, 95% CI 6 to 6.2) than for wards (7.5 patients, 95% CI 7.4 to 7.5). Across all sites, median HTS3 was 20 hours (20–21) for wards vs 5 (5–5) for EDs. Bias mitigation models significantly impacted NNE but not HTS3. Compared with the baseline model, the best-performing models for NNE with reduced interhospital variance were those trained separately on data from ED patients or from ward patients across all sites. These models generated the lowest NNE results for all care locations in seven of nine hospitals.

Conclusions

Implementing a single sepsis prediction model across all sites and care locations within multihospital systems may be unacceptable given large variances in NNE across multiple sites. Bias mitigation methods can identify models demonstrating improved performance across most sites in reducing alert burden but with no impact on the length of the prediction window.

Optimising antibacterial utilisation in Argentine intensive care units: a quality improvement collaborative

Background

There is limited evidence from antimicrobial stewardship programmes in less-resourced settings. This study aimed to improve the quality of antibacterial prescriptions by mitigating overuse and promoting the use of narrow-spectrum agents in intensive care units (ICUs) in a middle-income country.

Methods

We established a quality improvement collaborative (QIC) model involving nine Argentine ICUs over 11 months with a 16-week baseline period (BP) and a 32-week implementation period (IP). Our intervention package included audits and feedback on antibacterial use, facility-specific treatment guidelines, antibacterial timeouts, pharmacy-based interventions and education. The intervention was delivered in two learning sessions with three action periods along with coaching support and basic quality improvement training.

Results

We included 912 patients: 357 in the BP and 555 in the IP. Patients in the IP had higher APACHE II scores (17 (95% CI: 12 to 21) vs 15 (95% CI: 11 to 20), p=0.036) and SOFA scores (6 (95% CI: 4 to 9) vs 5 (95% CI: 3 to 8), p=0.006), and more frequently had renal failure (41.6% vs 33.1%, p=0.009), sepsis (36.1% vs 31.6%, p<0.001) and septic shock (40.0% vs 33.8%, p<0.001). Days of antibacterial therapy (DOT) were similar between the groups (change in slope from BP to IP 28.1 (95% CI: –17.4 to 73.5), p=0.2405). There were no differences in antibacterial defined daily doses (DDD) between the groups (change in slope from BP to IP 43.9 (95% CI: –12.3 to 100.0), p=0.1413).

The rate of antibacterial de-escalation based on microbiological culture was higher during the IP (62.0% vs 45.3%, p<0.001).

The infection prevention and control (IPC) assessment framework score increased in eight ICUs.

Conclusion

Implementing an antimicrobial stewardship programme in ICUs in a middle-income country via a QIC improved antibacterial de-escalation based on microbiological culture results, but did not reduce DOT or DDD. In addition, eight of nine ICUs improved their IPC assessment framework score.

Impact of a financial incentive on early rehabilitation and outcomes in ICU patients: a retrospective database study in Japan

Background

Early mobilisation of intensive care unit (ICU) patients has been recommended in clinical practice guidelines. Therefore, the Japanese universal health insurance system introduced an additional fee for early mobilisation and/or rehabilitation, which can be claimed by hospitals when starting rehabilitation of ICU patients within 48 hours after their ICU admission. However, the effect of this fee is unknown.

Objective

To measure the proportion of ICU patients who received early rehabilitation, and the impact of the financial incentive (the additional fee for early mobilisation and/or rehabilitation) on the length of ICU stay, the length of hospital stay and the proportion of patients discharged to home.

Design/methods

We included patients who were admitted to ICU within 2 days of hospitalisation between April 2016 and January 2020. We conducted interrupted time series analyses to assess the effects of the introduction of the financial incentive.
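
A minimal sketch of the kind of segmented (interrupted time series) Poisson model described is shown below, applied to simulated monthly data. The column names, the monthly aggregation and the assumed April 2018 introduction date are illustrative assumptions, not the study's dataset or code.

```python
# Hedged sketch: segmented Poisson regression for an interrupted time series.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated monthly data, April 2016 to January 2020; incentive assumed from April 2018.
months = pd.date_range("2016-04-01", "2020-01-01", freq="MS")
df = pd.DataFrame({
    "month": months,
    "time": np.arange(len(months)),                 # underlying secular trend
    "post": (months >= "2018-04-01").astype(int),   # 1 after the incentive starts
})
df["time_after"] = (df["time"] - df.loc[df["post"] == 1, "time"].min()) * df["post"]
df["admissions"] = 500
rate = 0.30 * np.exp(0.002 * df["time"] + 0.25 * df["post"] + 0.001 * df["time_after"])
df["early_rehab"] = np.random.default_rng(1).binomial(df["admissions"], np.clip(rate, 0, 1))

# Poisson model with an offset for admissions yields rate ratios (RRs) on the exp scale:
# "post" ~ immediate level change, "time_after" ~ change in trend after introduction.
model = smf.glm(
    "early_rehab ~ time + post + time_after",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["admissions"]),
).fit()
print(np.exp(model.params))        # rate ratios
print(np.exp(model.conf_int()))    # 95% CIs on the RR scale
```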

Results

The proportion of patients who received early rehabilitation increased immediately after the introduction of the financial incentive (rate ratio (RR) 1.293, 95% CI 1.240 to 1.349). The RR for the proportion of patients who received early rehabilitation was 1.008 (95% CI 1.005 to 1.011) in the period after the introduction of the financial incentive compared with the period before its introduction. There was no statistically significant change in the mean length of ICU stay, the mean length of hospital stay or the proportion of patients who were discharged to home.

Conclusion

After the introduction of the financial incentive, the proportion of ICU patients who received early rehabilitation increased. However, the effects of the financial incentive on the length of ICU stay, the length of hospital stay and the proportion of patients who were discharged to home were limited.

WHO research agenda on the role of the institutional safety climate for hand hygiene improvement: a Delphi consensus-building study

Background

Creating and sustaining an institutional climate conducive to patient and health worker safety is a critical element of successful multimodal hand hygiene improvement strategies aimed at achieving best practices. Repeated WHO global surveys indicate that the institutional safety climate consistently ranks the lowest among various interventions.

Methods

To develop an international expert consensus on research agenda priorities related to the role of institutional safety climate within the context of a multimodal hand hygiene improvement strategy, we conducted a structured consensus process involving a purposive sample of international experts. A preliminary list of research priorities was formulated following evidence mapping, and subsequently refined through a modified Delphi consensus process involving two rounds. In round 1, survey respondents were asked to rate the importance of each research priority. In round 2, experts reviewed round 1 ratings to reach a consensus (defined as ≥70% agreement) on the final prioritised items to be included in the research agenda. The research priorities were then reviewed and finalised by members of the WHO Technical Advisory Group on Hand Hygiene Research in Healthcare.

Results

Of the 57 invited participants, 50 completed Delphi round 1 (88%), and 48 completed round 2 (96%). Thirty-six research priority statements were included in round 1 across five thematic categories: (1) safety climate; (2) personal accountability for hand hygiene; (3) leadership; (4) patient participation and empowerment and (5) religion and traditions. In round 1, 75% of the items achieved consensus, with 9 statements carried forward to round 2, leading to a final set of 31 prioritised research statements.

Conclusion

This research agenda can be used by researchers, clinicians, policy-makers and funding bodies to address gaps in hand hygiene improvement within the context of an institutional safety climate, thereby enhancing patient and health worker safety globally.

Cluster randomised evaluation of a training intervention to increase the use of statistical process control charts for hospitals in England: making data count

Background

The way that data are presented can influence quality and safety initiatives. Time-series charts highlight changes but do not clarify whether data lie outside expected variation. Statistical process control (SPC) charts make this distinction and have been demonstrated to be effective in supporting hospital initiatives. To improve the uptake of the SPC methodology by hospitals in England, a training intervention was created. The current study evaluates the effectiveness of that training against the background of a wider national initiative to encourage the adoption of SPC charts.
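
For readers unfamiliar with how SPC charts separate expected (common-cause) variation from special-cause variation, here is a minimal sketch of an individuals (XmR) chart with 3-sigma control limits. The data are invented and the chart type is a common choice for board-level metrics; this is not the trial's training material.

```python
# Hedged sketch: an individuals (XmR) control chart flagging special-cause points.
import numpy as np
import matplotlib.pyplot as plt

values = np.array([12, 14, 11, 13, 15, 12, 14, 13, 22, 12, 11, 13], dtype=float)

centre = values.mean()
moving_range = np.abs(np.diff(values))
sigma_hat = moving_range.mean() / 1.128    # standard XmR conversion constant (d2 for n=2)
ucl = centre + 3 * sigma_hat               # upper control limit
lcl = centre - 3 * sigma_hat               # lower control limit

special_cause = (values > ucl) | (values < lcl)   # points outside expected variation

plt.plot(values, marker="o")
plt.axhline(centre, linestyle="--")
plt.axhline(ucl, color="red")
plt.axhline(lcl, color="red")
plt.scatter(np.where(special_cause)[0], values[special_cause], color="red", zorder=3)
plt.title("Individuals (XmR) SPC chart - illustrative data")
plt.show()
```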

Methods

A parallel cluster randomised trial was conducted with 16 English NHS hospitals. Half were randomised to the training intervention and half to the control. The primary analysis compared the difference in use of SPC charts within hospital board papers in a postrandomisation period (adjusting for baseline use). Trainees completed feedback forms with Likert scale and open-ended items.

Results

Fifteen hospitals participated across the study arms. SPC chart use increased in both intervention and control hospitals between the baseline and postrandomisation periods (29 and 30 percentage points, respectively). There was no statistically significant difference between the intervention and control hospitals in use of SPC charts in the postrandomisation period (average absolute difference 9%, 95% CI –34% to 52%). In the feedback forms, 93.9% (n=31/33) of trainees affirmed learning and 97.0% (n=32/33) had formed an intention to change their behaviour.

Conclusions

Control chart use increased in both intervention and control hospitals. This is consistent with a rising tide and/or contamination effect, such that the culture of control chart use is spreading across hospitals in England. Further research is needed to support hospitals implementing SPC training initiatives and to link SPC implementation to quality and safety outcomes. Such research could support future quality and safety initiatives nationally and internationally.

Trial registration number

NCT04977414.

Grand rounds in methodology: improving the design of staggered implementation cluster randomised trials

The stepped-wedge cluster randomised trial is a popular design in implementation and health services research. All clusters, such as clinics or hospitals, start in the control state, and gradually switch over to treatment in a random order until all clusters have received the intervention. The design allows for the incorporation of an experiment into the gradual roll-out of an intervention across clusters. However, the traditional stepped-wedge layout may not be the best choice in many scenarios. In this article, we discuss modifications to the stepped-wedge design that maintain a staggered roll-out, but which may improve some key characteristics. We consider improving the timing of implementation periods, reducing the volume of data collection and allowing for the recruitment of clusters over the course of the trial.
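
To make the staggered roll-out concrete, the small sketch below generates a conventional stepped-wedge treatment matrix (rows are clusters, columns are periods). The cluster count, period count and one-cluster-per-step schedule are illustrative assumptions, not a recommendation from the article.

```python
# Hedged sketch: a standard stepped-wedge layout (0 = control, 1 = intervention).
import numpy as np

def stepped_wedge(n_clusters: int, n_periods: int, seed: int = 0) -> np.ndarray:
    """One cluster switches to the intervention per period, in random order."""
    rng = np.random.default_rng(seed)
    crossover = rng.permutation(np.arange(1, n_clusters + 1))  # random switch period
    periods = np.arange(n_periods)
    return (periods[None, :] >= crossover[:, None]).astype(int)

design = stepped_wedge(n_clusters=6, n_periods=7)
print(design)
# Each row is a cluster: the first column is the all-control baseline period and,
# by the final column, every cluster has crossed over to the intervention.
```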

Ending nuclear weapons, before they end us

This May, the World Health Assembly (WHA) will vote on re-establishing a mandate for the WHO to address the health consequences of nuclear weapons and war.1 Health professionals and their associations should urge their governments to support such a mandate and support the new United Nations (UN) comprehensive study on the effects of nuclear war.

The first atomic bomb exploded in the New Mexico desert 80 years ago, in July 1945. Three weeks later, two relatively small (by today’s standards), tactical-size nuclear weapons unleashed a cataclysm of radioactive incineration on Hiroshima and Nagasaki. By the end of 1945, about 213 000 people were dead.2 Tens of thousands more have died from late effects of the bombings.

Last December, Nihon Hidankyo, a movement that brings together atomic bomb survivors, was awarded the Nobel Peace Prize for its ‘efforts to achieve a world free of nuclear weapons...

Why hospital falls prevention remains a global healthcare priority

The article by Cho et al1 in the current issue of BMJ Quality and Safety addresses the persistent and debilitating problem of hospital falls, which remain a challenge worldwide. Despite decades of research on hospital falls,2 considerable effort by health professionals,3 and publication of clinical guidelines on falls prevention,4 5 falls and associated injuries continue to be a major threat to patient safety and quality. The reasons why hospital falls continue to be associated with injuries and increased hospital length of stay are incompletely understood and vary across patients and settings. What is known is that patient falls education early after hospital admission helps to prevent falls.6–8 Staff education on how to prevent hospital falls also helps to reduce the risk.9 Exercise, safe footwear, environmental modifications, use of assistive devices such...

Under-reporting of falls in hospitals: a multisite study in South Korea

Background

Inpatient falls are adverse events that often result in injury due to complex interactions between the hospital environment and patient risk factors, and they remain a significant problem in clinical settings.

Objectives

This study aimed to identify (1) practice variations and key issues ranging from hospital fall management protocols to incident detection, and (2) potential approaches to address these challenges.

Design

Retrospective cohort study.

Setting

Four general hospitals in South Korea.

Methods

Qualitative and quantitative data were analysed using the Donabedian quality outcomes model. Data were collected retrospectively during 2015–2023 from the four general hospitals and comprised local practice protocols, patient admission and nursing data from electronic records, and incident self-reports. Content analysis of practice protocols and manual chart reviews of hospital falls incidents were conducted at each site. Quantitative analyses of nursing activities and of patient falls prevention interventions were also conducted at each site.

Results

There were variations in fall definitions, risk-assessment tools and inclusion and exclusion criteria among the local fall management protocols. The original and modified versions of the heuristic tools performed poorly to moderately, with areas under the receiver operating characteristic curve of 0.54–0.74 and 0.59–0.80, respectively. Preventive intervention practices varied significantly among the sites, with risk-targeted and tailored interventions delivered to only 1.15%–49.5% of at-risk patients. Fall events were not recorded in self-reporting systems and nursing notes for 29.5%–90.6% and 4.4%–17.1% of patients, respectively.

Conclusion

Challenges in fall prevention included weaknesses in the design and implementation of local fall protocols and low-quality incident self-reporting systems. Systematic and sustainable solutions are needed to help reduce hospital fall rates and injuries.

Frequency and preventability of adverse drug events in the outpatient setting

Background

Limited data exist regarding adverse drug events (ADEs) in the outpatient setting. The objective of this study was to determine the incidence, severity, and preventability of ADEs in the outpatient setting and identify potential prevention strategies.

Methods

We conducted an analysis of ADEs identified in a retrospective electronic health record review of outpatient encounters in 2018 at 13 outpatient sites in Massachusetts, comprising 13 416 outpatient encounters in 3323 patients. Triggers, including medications, consultations, laboratory results and others, were identified in the medical record. If a trigger was detected, a further in-depth review was conducted by nurses and adjudicated by physicians to examine the relevant information in the medical record. Patients were included in the study if they were at least 18 years of age and had at least one outpatient encounter with a physician, nurse practitioner or physician’s assistant in that calendar year. Patients were excluded if the outpatient encounter occurred in outpatient surgery, psychiatry, rehabilitation or paediatrics.

Results

In all, 5% of patients experienced an ADE over the 1-year period. We identified 198 ADEs among 170 patients, who had a mean age of 60 years. Most patients experienced one ADE (87%), 10% experienced two ADEs and 3% experienced three or more ADEs. The drug classes most frequently resulting in ADEs were cardiovascular (25%), central nervous system (14%) and anti-infective agents (14%). Severity was ranked as significant for 85% of ADEs, serious for 14% and life-threatening for 1%; there were no fatal ADEs. Of the ADEs, 22% were classified as preventable and 78% as not preventable. We identified 246 potential prevention strategies, and 23% of ADEs had more than one possible prevention strategy.

Conclusions

Despite efforts to prioritise patient safety, medication-related harms are still frequent. These results underscore the need for further patient safety improvement in the outpatient setting.

Patient and caregiver perspectives on causes and prevention of ambulatory adverse events: multilingual qualitative study

Context

Ambulatory adverse events (AEs) affect up to 25% of the global population and cause over 7 million preventable hospital admissions around the world. Though patients and caregivers are key actors in promoting and monitoring their own ambulatory safety, healthcare teams do not traditionally partner with patients in safety efforts. We sought to identify what patients and caregivers contribute when engaged in ambulatory AE review, focusing on under-resourced care settings.

Methods

We recruited adult patients, caregivers and patient advisors who spoke English, Spanish and/or Cantonese, from primary care clinics affiliated with a public health network in the USA. All had experience taking or managing a high-risk medication (blood thinners, insulin or opioid). We presented two exemplar ambulatory AEs: one involving a warfarin drug-drug interaction, and one involving delayed diagnosis of colon cancer. We conducted semistructured focus groups and interviews to elicit participants’ perceptions of causal factors and potential preventative measures for similar AEs. The study team conducted a mixed inductive-deductive qualitative analysis to derive major themes.

Findings

The sample included 6 English-speaking patients (2 in the focus group, 4 individual interviews), 6 Spanish-speaking patients (individual interviews), 4 Cantonese-speaking patients (2 in the focus group, 2 interviews), and 6 English-speaking patient advisors (focus group). Themes included: (1) Patients and teams have specific safety responsibilities; (2) Proactive communication drives safe ambulatory care; (3) Barriers related to limited resources contribute to ambulatory AEs. Patients and caregivers offered ideas for operational changes that could drive new safety projects.

Conclusions

An ethnically and linguistically diverse group of primary care patients and caregivers defined their agency in ensuring ambulatory safety and offered pragmatic ideas to prevent AEs they did not directly experience. Patients and caregivers in a safety net health system can feasibly participate in AE review to ensure that safety initiatives include their valuable perspectives.

General practitioners retiring or relocating and its association with healthcare use and mortality: a cohort study using Norwegian national data

Background

Continuity in the general practitioner (GP)-patient relationship is associated with better healthcare outcomes. However, few studies have examined the impact of permanent discontinuities on all listed patients when a GP retires or relocates.

Aim

To investigate changes in the Norwegian population’s overall healthcare use and mortality after discontinuity due to Regular GPs retiring or relocating.

Methods

Linking national registers, we compared days with healthcare use and mortality for matched individuals affiliated with Regular GPs who retired or relocated versus those who continued in practice. We included listed patients 3 years prior to exposure and followed them for up to 5 years after. We assessed changes over time using a difference-in-differences design with Poisson regression.
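
A hedged sketch of a difference-in-differences Poisson regression in the spirit of the design described is shown below. The simulated data, variable names and robust-error choice are assumptions for illustration, not the registry analysis itself.

```python
# Hedged sketch: difference-in-differences Poisson model for contact rates.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 20_000
df = pd.DataFrame({
    "exposed": rng.integers(0, 2, n),   # 1 = listed with a GP who retired/relocated
    "post": rng.integers(0, 2, n),      # 1 = after the (matched) discontinuity date
    "follow_up_days": 365,
})
base_rate = 4.0 / 365
df["gp_contacts"] = rng.poisson(
    base_rate * df["follow_up_days"]
    * np.exp(0.05 * df["exposed"] + 0.02 * df["post"] + 0.03 * df["exposed"] * df["post"])
)

# The exposed:post interaction is the difference-in-differences rate ratio.
model = smf.glm(
    "gp_contacts ~ exposed * post",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["follow_up_days"]),
).fit(cov_type="HC1")                   # robust standard errors as a simple safeguard
print(np.exp(model.params["exposed:post"]))          # DiD rate ratio
print(np.exp(model.conf_int().loc["exposed:post"]))  # 95% CI on the RR scale
```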

Results

From 2011 to 2020, we identified 819 Regular GPs who retired and 228 who relocated, together affiliated with 1 165 295 people. Relative to 3 years before discontinuity, the rate ratio (RR) of daytime GP contacts increased 3% (95% CI 2 to 4) in year 1 after discontinuity, corresponding to 148 (95% CI 54 to 243) additional contacts per 1000 patients. This increase persisted for 5 years. Out-of-hours GP contacts increased in the first year, RR 1.04 (95% CI 0.99 to 1.09), corresponding to 16 (95% CI –5 to 37) contacts per 1000 patients. Planned hospital contacts increased 3% (95% CI 2 to 4) in year 1, persisting into year 5. Acute hospital contacts increased 5% (95% CI 3 to 7), primarily in the first year. These 1-year effects corresponded to 51 (95% CI 18 to 83) planned and 13 (95% CI 7 to 18) acute hospital contacts per 1000 patients. Mortality was unchanged up to 5 years after discontinuity.

Conclusion

Regular GPs’ retirement and relocation were associated with small to moderate increases in healthcare use among their listed patients, while mortality was unaffected.

Development of the Patient-Reported Indicator Surveys (PaRIS) conceptual framework to monitor and improve the performance of primary care for people living with chronic conditions

Background

The Organisation for Economic Co-operation and Development (OECD) Patient-Reported Indicator Surveys (PaRIS) initiative aims to support countries in improving care for people living with chronic conditions by collecting information on how people experience the quality and performance of primary and (generalist) ambulatory care services. This paper presents the development of the conceptual framework that underpins the rationale for and the instrumentation of the PaRIS survey.

Methods

The guidance of an international expert taskforce and the OECD Health Care Quality Indicators framework (2015) provided initial specifications for the framework. Relevant conceptual models and frameworks were then identified from searches in bibliographic databases (Medline, EMBASE and the Health Management Information Consortium). A draft framework was developed through narrative review. The final version was codeveloped with the participation of an international Patient Advisory Panel, an international Technical Advisory Community and online international workshops with patient representatives.

Results

85 conceptual models and frameworks were identified through searches. The final framework maps relationships between the following domains (and subdomains): patient-reported outcomes (symptoms, functioning, self-reported health status, health-related quality of life); patient-reported experiences of care (access, comprehensiveness, continuity, coordination, patient safety, person-centredness, self-management support, trust, overall perceived quality of care); health and care capabilities; health behaviours (physical activity, diet, tobacco and alcohol consumption); sociodemographic characteristics and self-reported chronic conditions; delivery system characteristics (clinic, main healthcare professional); health system, policy and context.

Discussion

The PaRIS conceptual framework has been developed through a systematic, accountable and inclusive process. It serves as the basis for the development of the indicators and survey instruments as well as for the generation of specific hypotheses to guide the analysis and interpretation of the findings.

A realist review of how, why, for whom and in which contexts quality improvement in healthcare impacts inequalities

Introduction

Quality improvement (QI) is aimed at improving care. Equity is one of the six domains of healthcare quality, as defined by the Institute of Medicine. If this domain is ignored, QI projects have the potential to maintain or even worsen inequalities.

Aims and objectives

We aimed to understand why, how, for whom and in which contexts QI approaches increase, or do not change, health inequalities in healthcare organisations.

Methods

We conducted a realist review by first developing an initial programme theory, then searching MEDLINE, Embase, CINAHL, PsycINFO, Web of Science and Scopus for QI projects that considered health inequalities. Included studies were analysed to generate context-mechanism-outcome configurations (CMOCs) and develop an overall programme theory.

Results

We screened 6259 records. Thirty-six records met our inclusion criteria, the majority of which were from the USA. We developed CMOCs covering four clusters: values and understanding, resources, data, and design. Five of the CMOCs described circumstances in which QI may increase inequalities and 15 described circumstances in which it may reduce them. We found that QI projects that are values-led and incorporate diverse, patient-led data into design are more likely to address health inequalities. However, when staff and patients cannot engage fully with equity-focused projects, due to practical or technological barriers, QI projects are more likely to worsen inequalities.

Conclusions

The potential for QI projects to positively impact inequalities depends on embedding equity-focused values across organisations, ensuring sufficient and appropriate resources are provided to staff delivering QI, and using diverse disaggregated data alongside considered user involvement to inform and assess the success of QI projects. Policymakers and practitioners should ensure that QI projects are used to address inequalities.

Time to de-implementation of low-value cancer screening practices: a narrative review

The continued use of low-value cancer screening practices represents not only healthcare waste but also a potential cascade of invasive diagnostic procedures and patient anxiety and distress. While prior research has shown it takes an average of 15 years to implement evidence-based practices in cancer control, little is known about how long it takes to de-implement low-value cancer screening practices. We reviewed evidence on six United States Preventive Services Task Force ‘Grade D’ cancer screening practices: (1) cervical cancer screening in women <21 years and >65 years, (2) prostate cancer screening in men ≥70 years and (3) ovarian, (4) thyroid, (5) testicular and (6) pancreatic cancer screening in asymptomatic adults. We measured the time from a landmark publication supporting the guideline to subsequent de-implementation, defined as a 50% reduction in the use of the practice in routine care. The pace of de-implementation was assessed using nationally representative surveillance systems and peer-reviewed literature from the USA. We found the time to de-implementation of cervical cancer screening was 4 years for women <21 years and 16 years for women >65 years. Prostate cancer screening in men ≥70 years has not reached a 50% reduction in use since the 2012 guideline release. We did not identify sufficient evidence to measure the time to de-implementation for ovarian, thyroid, testicular and pancreatic cancer screening in asymptomatic adults. Surveillance of low-value cancer screening is sparse, posing a clear barrier to tracking the de-implementation of these screening practices. Improving the systematic measurement of low-value cancer control practices is imperative for assessing the impact of de-implementation on patient outcomes, healthcare delivery and healthcare costs.

Economic evaluations of quality improvement interventions: towards simpler analyses and more informative publications

With public reporting and value-based payment, healthcare organisations have strong incentives to optimise quality of care, improve patient outcomes and lower costs.1 In response, organisations are implementing diverse and often novel quality improvement (QI) interventions (systematic efforts to improve the structure, process or outcome of care). Many organisations routinely assess the clinical effects and costs of QI interventions to support internal decisions about whether to discontinue, sustain or expand them.

These internal analyses create an opportunity for QI teams to publish their experiences and inform decision-making at peer organisations. Since QI interventions can be labour-intensive and thus costly, published economic evaluations are of great interest to leaders weighing decisions about whether to adopt them and how best to implement them. Published evaluations seek to answer a two-part question about the effectiveness and cost of a specific QI intervention at one healthcare organisation, with the goal of reporting...
