Reproducibility in Systematic Reviews Used to Inform Clinical Practice Guidelines

Date & Time
Monday, September 4, 2023, 12:30 PM - 2:00 PM
Location Name
Pickwick
Session Type
Poster
Category
Research integrity and fraud
Authors
Bennett A1, Shaver N1, Page M2, Thibault R3, Sulis G4, Little J1, Brouwers M1, Tricco A5, Moher D1
1Knowledge Synthesis and Application Unit, University of Ottawa, Canada
2Monash University, Australia
3Stanford University, USA
4University of Ottawa, Canada
5Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Unity Health Toronto, Canada
Description

Background: Published research is typically considered trustworthy. Research organizations use publications as one of the main ways to assess researcher performance for hiring, promotion, and tenure. One assumption of trustworthy science is that an independent research team conducting the same study would reproduce similar results. Several large-scale collaborations, however, have documented poor reproducibility across disciplines. Poor reproducibility is particularly consequential in medicine, yet it has not been well examined there. Systematic reviews (SRs) are often used, and in some jurisdictions mandated, to inform clinical practice guidelines (CPGs) that guide patient management. If the SRs used to inform CPGs cannot be reproduced, this might affect the guidance CPGs recommend for patient care.
Objectives: The goal of this project is to investigate the reproducibility of SRs used to inform CPGs.
Methods: We will hand-search all CPGs published by the National Institute for Health and Care Excellence (NICE; UK), the U.S. Preventive Services Task Force, the Canadian Task Force on Preventive Health Care, and the National Health and Medical Research Council, as well as more than 1,000 CPGs from the Strategy for Patient-Oriented Research (SPOR) Evidence Alliance asset map. From this sample, we will randomly select two CPGs per entity and identify the index SR (with meta-analysis) used in developing each respective CPG. If a CPG has used more than one source SR, we will randomly select one. We will also examine whether any of the selected SRs have been used in other CPGs.
Planned Analysis: All SRs will be assessed for (1) their reporting quality (using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses [PRISMA] 2020 checklist) and methodological quality (using AMSTAR-2 [A Measurement Tool to Assess Systematic Reviews]); (2) their sharing of the data, analytical code, and other materials (e.g., a manual of psychological therapy) used in their meta-analysis; and (3) the consistency of our re-analyses of all eight SRs (i.e., average effect sizes, statistical significance of results, and p-values) with the originally published results.
Expected Outcomes: This is an important study because very few researchers have focused on reproducibility issues within medicine. We will update our EQUATOR Canada Publication School module on reproducibility using the findings from this research.
Patient involvement: We will collaborate with patient partners to obtain feedback on our analyses, interpretation, and communication of findings.
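The re-analysis step described above (recomputing pooled effect sizes and comparing them with the originally published estimates) could look roughly like the following sketch. This is purely illustrative and is not the authors' protocol: the choice of a DerSimonian-Laird random-effects model, the study-level effect sizes and variances, the reported estimate, and the consistency tolerance are all invented assumptions for demonstration.

```python
# Illustrative sketch of a meta-analysis re-computation, assuming study-level
# effect sizes and variances have been extracted from a published SR.
# All numbers below are invented for illustration only.
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method."""
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)      # fixed-effect pooled mean
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))    # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                 # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))                               # standard error of pooled
    return pooled, se

# Invented log odds ratios and variances from three hypothetical trials
effects = [-0.42, -0.18, -0.55]
variances = [0.04, 0.09, 0.06]

pooled, se = dersimonian_laird(effects, variances)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(f"re-analysed pooled log-OR = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")

# Consistency check against the originally reported estimate (also invented);
# the 0.05 tolerance is an arbitrary threshold chosen for this example.
reported = -0.40
print("consistent" if abs(pooled - reported) < 0.05 else "discrepant")
```

In practice, a reproducibility assessment would also compare confidence intervals and p-values, and any consistency criterion would need to be prespecified rather than chosen post hoc as in this toy tolerance.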