Risk of bias and applicability assessments for overall prognosis studies (RoB-OPS): Current development status

Date & Time
Wednesday, September 6, 2023, 2:25 PM - 2:35 PM
Session Type
Oral presentation
Oral session
Diagnostic Test Accuracy and prognostic evidence
Kreuzberger N1, Hirsch C1, Dorando E1, Moons K2, Riley R3, Wolff R4, Akl E5, Skoetz N1
1Evidence-based Medicine, Department I of Internal Medicine, University Hospital, and Faculty of Medicine, University of Cologne, Germany
2Julius Center for Health Sciences and Primary Care, Cochrane Netherlands, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
3Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom
4Kleijnen Systematic Reviews Ltd, York, United Kingdom
5Department of Internal Medicine, American University of Beirut, Beirut, Lebanon; Department of Health Research Methods, Evidence, and Impact (HEI), McMaster University, Hamilton, Canada

Background: Overall prognosis (OP) refers to the average course or future outcomes of individuals with a particular exposure or a health-related condition. OP estimates are important tools for individualizing the estimation of benefits and harms of interventions, developing clinical practice recommendations, and guiding future research. To estimate OP, systematic reviews summarize prognosis outcome estimates extracted from various primary studies or data sources. Although the assessment of risk of bias (RoB) is an integral part of any systematic review, there is no specific tool for assessing RoB in OP estimates reported by primary studies.
Objectives: We aim to develop a tool to assess the RoB in OP estimates obtained from primary studies.
Methods: A steering group (StG) of eight experts on prognosis, RoB assessment, and systematic reviews developed a first draft based on available tools and refined it through iterative discussions. We obtained external feedback by surveying members of stakeholder groups, and the StG is currently incorporating this survey feedback through further rounds of discussion.
Results: We decided to separate the tool into assessments for applicability and RoB. We address the domains “participants,” “outcome,” “analysis,” and “selective reporting” for RoB and the domains “participants’ setting” and “outcome” for applicability. We introduce each block of related questions with an elaboration box, followed by factually phrased signaling questions that can be answered with “yes,” “probably yes,” “no,” “probably no,” and “not sufficient information.” For the RoB assessment, the participants domain aims to detect whether the participant sample of an OP study was selective, covering recruitment strategy and inappropriate exclusion of participants. The second domain assesses whether the outcome was defined and measured appropriately and whether the frequency of outcome assessment was appropriate. The analysis domain aims to detect flaws regarding attrition and analytical methods, and the last domain covers selective reporting.
Conclusions: The extensive discussions among StG members reflect the complexity of distinguishing between RoB and applicability. We are currently finalizing the remaining domains, creating flow charts to facilitate domain-based decision-making, piloting the tool, and consolidating it externally via subsequent surveys.