A tool to assess risk of bias in studies estimating the prevalence of mental health disorders (RoB-PrevMH)
2Department of Psychiatry and Psychotherapy, Klinikum rechts der Isar, School of Medicine, Technical University of Munich, Munich, Germany
3Department of Ophthalmology, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
4Department of Health Promotion and Human Behavior, Kyoto University Graduate School of Medicine / School of Public Health, Kyoto, Japan
5Department of Psychiatry, University of Oxford; Oxford Precision Psychiatry Lab, NIHR Oxford Health Biomedical Research Centre; Oxford Health NHS Foundation Trust, Warneford Hospital, Oxford, United Kingdom
6Department of Psychiatry and Psychotherapy, Klinikum rechts der Isar, School of Medicine, Technical University of Munich, Munich, Germany
7Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
Background: Studies of prevalence provide essential information for estimating the burden of mental health conditions, which can inform research and policymaking. The Coronavirus Disease 2019 (COVID-19) pandemic has generated a large volume of literature on the prevalence of various conditions, including those related to mental health. Biases affect how certain we can be about the available evidence, and assessing the risk of bias (RoB) is an essential step in conducting a systematic review; however, no standard tool for assessing RoB in prevalence studies exists.
Objectives: For the purposes of a living systematic review on prevalence of mental health disorders during the COVID-19 pandemic, we developed a RoB tool to evaluate prevalence studies in mental health (RoB-PrevMH) and tested its interrater reliability.
Methods: We reviewed existing RoB tools for prevalence studies published up to September 2020 to develop a tool for prevalence studies in mental health. We tested the reliability of assessments made by different users of RoB-PrevMH in 83 studies stemming from two systematic reviews of prevalence studies in mental health. We assessed interrater agreement by calculating the proportion of agreement and the kappa statistic for each item.
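The two agreement statistics named above can be sketched as follows. This is a minimal illustration with made-up ratings (not the study data), computing the proportion of raw agreement and unweighted Cohen's kappa for two raters.

```python
# Sketch: proportion of agreement and Cohen's kappa for two raters.
from collections import Counter

def agreement_and_kappa(rater1, rater2):
    """Return (proportion of agreement, Cohen's kappa) for paired ratings."""
    n = len(rater1)
    # Observed agreement: fraction of items where the raters give the same judgement.
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: sum over categories of the product of marginal frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    categories = set(rater1) | set(rater2)
    p_exp = sum((c1[k] / n) * (c2[k] / n) for k in categories)
    # Kappa corrects observed agreement for agreement expected by chance.
    return p_obs, (p_obs - p_exp) / (1 - p_exp)

# Illustrative "low"/"high" risk-of-bias judgements by two hypothetical raters.
r1 = ["low", "low", "high", "low", "high", "low"]
r2 = ["low", "high", "high", "low", "high", "low"]
p, k = agreement_and_kappa(r1, r2)  # p = 5/6, k = 2/3
```

In practice one would use an established statistics package rather than hand-rolled code; the formulas are shown here only to make the Methods concrete.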
Results: RoB-PrevMH consists of three items that address selection bias and information bias. Introductory and signaling questions guide the application of the tool to the review question. The interrater agreement for the three items was 83%, 90%, and 93%. The weighted kappa was 0.63 (95% CI 0.54 to 0.73), 0.71 (95% CI 0.67 to 0.85), and 0.32 (95% CI −0.04 to 0.63), respectively.
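The weighted kappa reported above extends the unweighted statistic by penalizing larger disagreements on an ordinal scale more heavily. A minimal sketch with linear weights and made-up ratings (the category labels are illustrative assumptions, not the tool's actual response options):

```python
# Sketch: linearly weighted Cohen's kappa for ordinal ratings.
from collections import Counter

def weighted_kappa(r1, r2, categories):
    """Linear-weighted kappa; `categories` is the ordered rating scale."""
    idx = {c: i for i, c in enumerate(categories)}
    n, m = len(r1), len(categories)
    # Disagreement weight grows linearly with distance between categories.
    w = [[abs(i - j) / (m - 1) for j in range(m)] for i in range(m)]
    obs = sum(w[idx[a]][idx[b]] for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    exp = sum(w[i][j] * (c1[categories[i]] / n) * (c2[categories[j]] / n)
              for i in range(m) for j in range(m))
    return 1 - obs / exp

scale = ["low", "moderate", "high"]  # hypothetical ordinal RoB judgements
a = ["low", "moderate", "high", "low", "high", "moderate"]
b = ["low", "high", "high", "low", "moderate", "moderate"]
k = weighted_kappa(a, b, scale)  # 0.625
```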
Conclusions: RoB-PrevMH can determine whether selection or information biases are present when the prevalence of mental health disorders is measured. By excluding questions about reporting quality, the tool focuses strictly on bias. Its validity, reliability, and applicability should be assessed in future projects.