The revised JBI critical appraisal tool for the assessment of risk of bias for randomized controlled trials

Date & Time
Tuesday, September 5, 2023, 2:35 PM - 2:45 PM
Location Name
Churchill
Session Type
Oral presentation
Category
Bias
Oral session
Bias and certainty of evidence
Authors
Barker T1, Stone J2, Sears K3, Klugar M4, Leonardi-Bee J5, Aromataris E2, Munn Z1
1Health Evidence Synthesis, Recommendations and Impact, School of Public Health, The University of Adelaide, Australia
2JBI, Faculty of Health and Medical Sciences, The University of Adelaide, Australia
3Queen’s Collaboration for Health Care Quality, Queen’s University, Canada
4Czech National Centre for Evidence-Based Healthcare and Knowledge Translation (Cochrane Czech Republic, The Czech Republic (Middle European) Centre for Evidence-Based Healthcare: A JBI Centre of Excellence, Masaryk University GRADE Centre), Czech Republic
5The Nottingham Centre for Evidence Based Healthcare: A JBI Centre of Excellence, School of Medicine, University of Nottingham, United Kingdom
Description

Background: JBI (formerly known as the Joanna Briggs Institute) offers a suite of critical appraisal instruments that are freely available to evidence synthesisers. These instruments have been developed by JBI and collaborators and approved by the JBI Scientific Committee following extensive consultation. Following recent developments in the science of risk of bias assessment, it has been acknowledged that the existing suite of instruments is not aligned with these developments and conflates and confuses the process of critical appraisal with that of risk of bias assessment.
Objectives: Here, we introduce the revised critical appraisal tool for randomized controlled trials (RCTs) and detail the key changes made from the previous iteration.
Methods: The JBI Effectiveness Methodology Group (EMG) began the update procedure by cataloguing the questions asked in each JBI critical appraisal tool for study designs that employ quantitative data. Using Delphi-like methods, these questions were ordered into constructs of validity (internal, statistical conclusion, comprehensiveness of reporting, external). Questions related to the internal validity construct were further catalogued to a domain of bias through a series of mapping exercises. Finally, questions were separated according to whether they are answered at the study, outcome, or result level.
Findings: A strength of the JBI critical appraisal instruments has been their flexibility to facilitate risk of bias assessments following different approaches. However, because of how the tools were presented, using them with a domain-based approach was not intuitive to all users. By presenting the questions according to the construct of validity to which they belong and the domain of bias they address, users of the revised tools are better placed to follow a domain-based approach if they choose. The revised instrument also identifies which questions address internal validity and which questions should be answered at different hierarchical levels.
Conclusions: The revision to the JBI critical appraisal instruments aims to provide greater flexibility to users of these tools. It is expected that this work will increase the usability and applicability of these instruments while maintaining consistency with modern advances in evidence synthesis.