Methodological quality assessment should move beyond design specificity

Date & Time
Monday, September 4, 2023, 12:30 PM - 2:00 PM
Location Name
Pickwick
Session Type
Poster
Category
Bias
Authors
Stone J¹, Doi S²
¹JBI, Australia
²Qatar University, Qatar
Description

Background: Tools used to assess the potential for bias in research are usually design-specific. The limitation of design-specific tools is that they ignore design-related safeguards and are not useful for assessment across study designs.
Objectives: This study aims to assess the utility of a unified tool (the MASTER scale) for bias assessment against design-specific tools in terms of content and coverage.
Methods: Each of the safeguards in the design-specific tools was compared and matched to safeguards in the unified MASTER scale. The design-specific tools were the JBI, Scottish Intercollegiate Guidelines Network (SIGN), and Newcastle-Ottawa Scale (NOS) tools for analytic study designs. Duplicates, safeguards that could not be mapped to the MASTER scale, and items not applicable as safeguards against bias were flagged and described.
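The matching step described above can be pictured as a simple set-mapping exercise. The minimal Python sketch below illustrates that idea only; the item names, MASTER IDs, and the assumed 36-safeguard size of the MASTER scale are hypothetical placeholders, not the actual content of any of these instruments.

```python
# Illustrative sketch of safeguard matching and coverage counting.
# All item names and MASTER IDs below are invented placeholders.

# Hypothetical MASTER safeguards, keyed by an invented ID
# (size of 36 assumed here purely for illustration).
MASTER = {f"M{i:02d}" for i in range(1, 37)}

# Each design-specific tool maps its items to a MASTER ID where a
# match exists; None marks items that could not be mapped.
TOOL_TO_MASTER = {
    "JBI":  {"blinded outcome assessor": "M07", "valid exposure measure": "M12",
             "strategy for confounders": "M15", "statement of funding": None},
    "SIGN": {"comparable groups": "M03", "blinded outcome assessor": "M07"},
    "NOS":  {"representativeness": "M01", "comparable groups": "M03"},
}

for tool, items in TOOL_TO_MASTER.items():
    # Unique MASTER safeguards this tool covers (duplicates collapse in the set).
    mapped = {m for m in items.values() if m is not None}
    # Tool items with no counterpart on the unified scale.
    unmapped = [i for i, m in items.items() if m is None]
    print(f"{tool}: covers {len(mapped)} unique MASTER safeguards, "
          f"missing {len(MASTER - mapped)} MASTER safeguards, "
          f"unmappable items: {unmapped}")

# Safeguards shared by all three tools (duplicated effort across tools).
common = set.intersection(*[
    {m for m in items.values() if m is not None}
    for items in TOOL_TO_MASTER.values()
])
print("shared across all tools:", sorted(common))
```

Organizing the comparison this way makes the quantities reported in the Results (unique safeguards per tool, safeguards missing relative to the unified scale) direct set operations over the mapping.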
Results: Many safeguards were common across the JBI, SIGN, and NOS tools, with the number of unique safeguards ranging from 10 to 23 across the tools. These three design-specific toolsets were each missing 14 to 26 safeguards found in the MASTER scale. The MASTER scale had complete coverage of the safeguards within the three toolsets for analytic designs.
Conclusions: The MASTER scale provides a unified framework for bias assessment of analytic study designs: it has good coverage, avoids duplication, has less redundancy, and is more convenient for methodological quality assessment in evidence synthesis. It also allows assessment across designs, which cannot be done using a design-specific tool.
Relevance and importance to patients: Identifying research studies at risk of bias is crucial for the reliable translation of evidence into policy and practice. A unified approach is useful when an evidence synthesis includes several designs, as studies can be down-ranked based on design-specific safeguards. A systematic review should attempt to identify all the available evidence, and this approach is therefore useful for synthesizing the results of different study designs.