The Application of PROBAST and Prevalence of Unfavorable Risk of Bias in Systematic Reviews of Prediction Models

Date & Time
Wednesday, September 6, 2023, 2:15 PM - 2:25 PM
Location Name
Victoria
Session Type
Oral presentation
Category
Overviews of reviews and scoping reviews
Oral session
Diagnostic Test Accuracy and prognostic evidence
Authors
Yang Y1, Meng X1, Lu Y2, Liao J3, Zhang X4, Wang S1, Wang J5
1Department of Epidemiology and Biostatistics, School of Public Health, Peking University, Beijing, China
2School of Public Health, Capital Medical University, Beijing, China
3School of Health Humanities, Peking University, Beijing, China
4College of Science, Wuhan University of Science and Technology, Wuhan, China
5Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands
Description

Background: The Prediction Model Risk Of Bias ASsessment Tool (PROBAST) has been widely used to appraise prediction models in systematic reviews since its publication.
Objectives: This study aimed to explore problems in the application of PROBAST and to investigate the prevalence of unfavorable risk of bias (ROB) ratings in existing reviews.
Methods: English-language reviews were searched in PubMed and Web of Science up to April 17, 2022. Studies were eligible if they (1) used PROBAST; (2) included at least one prediction model; and (3) reported ROB results. Two reviewers screened the search results independently, with disagreements resolved by discussion. Data were extracted by one reviewer and checked by another. Details of PROBAST application were collected, including use of the CHARMS checklist, the number of ROB evaluators, the ROB evaluation method, inter-rater agreement, and the reporting level of PROBAST results. Numbers and percentages of each ROB rating were calculated for each domain and signaling question across all included prediction models.
Results: A total of 201 reviews and 9,652 prediction models were included in the study. Regarding the application of PROBAST, 103 (51.2%) reviews did not use the CHARMS checklist, and 84 (41.8%) and 81 (40.3%) reviews did not report the number of ROB evaluators and the ROB evaluation method, respectively. Of the 192 reviews that possibly assessed ROB in duplicate, 182 (94.8%) did not report inter-rater agreement. Regarding the reporting level of PROBAST results, 16 (8.0%) reviews did not report the ROB of individual prediction models. Of the remaining 185 reviews, 151 (81.6%) did not report ROB at the signaling-question level. Regarding PROBAST results, the highest percentages of unfavorable ROB were reported in the analysis domain (high: 74.6%; unclear: 10.4%) (Figure 1). The signaling questions on handling of missing data and on model fitting showed the highest percentages of high (52.6%) and unclear (39.6%) ROB, respectively (Figure 2).
Conclusions: Inappropriate use and inadequate reporting of PROBAST remained common in reviews, and a high prevalence of unfavorable ROB was observed. In the future, researchers should standardize the use and reporting of PROBAST and improve the quality of prediction models when developing, validating, or updating them.
Patient, public, and/or healthcare consumer involvement: No.

Figure 1.jpg
Figure 2.jpg