Development and validation of an Objective Structured Clinical Examination (OSCE) model for assessing Evidence-Based Medicine (EBM) competency

Date & Time
Wednesday, September 6, 2023, 12:30 PM - 2:00 PM
Location Name
Pickwick
Session Type
Poster
Category
Others
Authors
Chi C1, Lin H2, Lin Y2, Kuo L2
1Chang Gung Memorial Hospital, Linkou, Taiwan
2Chang Gung Memorial Hospital, Chiayi, Taiwan
Description

Background: Evidence-based medicine (EBM) is an essential competency for health care professionals. However, it is challenging to assess the learning effectiveness of EBM education.
Objectives: To develop and validate an Objective Structured Clinical Examination (OSCE) model for assessing EBM competency.
Methods: We drafted a five-station OSCE model to assess examinees’ EBM competency: (1) question forming (history taking and diagnosis of a standardised patient’s health problem, with the proposal of a structured PICO question); (2) information searching; (3) literature reading; (4) critical appraisal; and (5) application. The scoring sheet for the ‘information searching’ station was adapted from the scale for measuring evidence-searching capability that we had developed previously. We recruited five experts in EBM and OSCE, employed a modified Delphi method to revise and validate the draft scoring sheets, and calculated the item-content validity index (I-CVI) for each scoring item. We considered consensus achieved when every scoring item was rated ≥ 3 by all experts with an interquartile range ≤ 1. We then held two OSCE examinations for 30 postgraduate-year students and registrars. At each station, two examiners independently used the scoring sheets to assess examinees’ EBM competency for inter-rater analysis, and one examiner rescored the videotaped performances at different time points for intra-rater analysis. We analysed the reliability and validity of the scoring sheets and calculated the intraclass correlation coefficient (ICC) for inter- and intra-rater reliability.
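
The I-CVI and the consensus rule above are straightforward to compute. As an illustrative sketch only (not the authors’ analysis code), assuming a 4-point relevance scale and hypothetical ratings from a five-member expert panel for one item:

```python
import numpy as np

def i_cvi(ratings):
    """Item-content validity index: proportion of experts rating
    the item relevant (>= 3 on an assumed 4-point relevance scale)."""
    r = np.asarray(ratings)
    return float(np.mean(r >= 3))

def consensus_reached(ratings):
    """Consensus rule stated in the abstract: every expert rates the
    item >= 3 and the interquartile range is <= 1."""
    r = np.asarray(ratings)
    iqr = np.percentile(r, 75) - np.percentile(r, 25)
    return bool(np.all(r >= 3) and iqr <= 1)

# Hypothetical panel ratings for a single scoring item
panel = [4, 3, 4, 4, 3]
print(i_cvi(panel))              # 1.0
print(consensus_reached(panel))  # True
```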
Results: After two rounds of the modified Delphi process, consensus on the scoring sheets was achieved, with each scoring sheet comprising 10 items. The I-CVI of these items ranged from 0.83 to 1.00. The inter-rater ICC was 0.755 for the ‘question forming’ station, 0.788 for the ‘information searching’ station, 0.913 for the ‘critical appraisal: systematic review’ station, 0.932 for the ‘critical appraisal: randomised controlled trial’ station, and 0.860 for the ‘application’ station. The intra-rater ICCs for these stations were 0.816, 0.954, 0.830, 0.929, and 0.871, respectively.
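
The abstract does not specify which ICC model was used. As a hedged illustration, the inter-rater ICC for a single station could be estimated from a long-format table of examiner scores with an off-the-shelf routine such as pingouin’s intraclass_corr; the data below are hypothetical:

```python
import pandas as pd
import pingouin as pg  # pip install pingouin

# Hypothetical station scores: four examinees, each rated
# independently by two examiners on the same scoring sheet
df = pd.DataFrame({
    "examinee": [1, 1, 2, 2, 3, 3, 4, 4],
    "examiner": ["A", "B"] * 4,
    "score":    [8, 7, 6, 6, 9, 8, 5, 6],
})

# Returns ICC estimates for the standard model variants
# (e.g. two-way random effects, absolute agreement)
icc = pg.intraclass_corr(data=df, targets="examinee",
                         raters="examiner", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```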
Conclusions: We developed and validated an OSCE model that assesses different aspects of EBM competency. It can be used to assess students’ EBM learning effectiveness and to serve as a reference for developing milestones in EBM competency.