Is a new approach needed for rating the quality of evidence of effect estimates derived from matching-adjusted indirect comparisons (MAICs)?

Date & Time
Monday, September 4, 2023, 2:55 PM - 3:05 PM
Location Name
Albert
Session Type
Oral presentation
Category
Statistical methods
Authors
Posadzki P1, Bajpai R2
1Kleijnen Systematic Reviews Ltd, UK
2Keele University, UK
Description

Background: In health technology assessments (HTAs), matching-adjusted indirect comparisons (MAICs) are used when head-to-head randomised studies comparing the drug (therapy) in question with a comparator (e.g., standard care in the treatment of a disease) are not available. MAICs use individual patient data (IPD) from trials of one treatment to match baseline summary statistics reported from trials of another treatment, an approach similar to propensity score weighting, whereby treatment outcomes are compared across balanced trial populations. Although the Grading of Recommendations Assessment, Development and Evaluation (GRADE) and Confidence in Network Meta-Analysis (CINeMA) approaches have been proposed for rating the quality of treatment effect estimates from network meta-analysis (NMA), an equivalent approach for MAICs appears to be missing.
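As a minimal illustrative sketch (not part of this abstract's methods), the method-of-moments reweighting underlying MAIC can be expressed in a few lines of Python: IPD covariates are centred on the aggregate target means, and weights of the form exp(z·a) are solved so that the weighted IPD means match the comparator trial's reported baseline statistics. The covariates (age, proportion male) and target values below are hypothetical.

```python
import numpy as np

def maic_weights(x, target_means, n_iter=50):
    """Method-of-moments MAIC weights: find a such that the
    exp(z @ a)-weighted means of the IPD equal target_means."""
    z = x - target_means                  # centre IPD covariates on the target
    a = np.zeros(z.shape[1])
    for _ in range(n_iter):               # Newton's method on the convex objective sum exp(z @ a)
        w = np.exp(z @ a)
        grad = z.T @ w                    # zero exactly when weighted means hit the target
        hess = (z * w[:, None]).T @ z
        a -= np.linalg.solve(hess, grad)
    w = np.exp(z @ a)
    return w / w.sum() * len(w)           # rescale so weights sum to n

# Hypothetical IPD: age (years) and sex (1 = male) for 500 patients
rng = np.random.default_rng(0)
x = np.column_stack([rng.normal(60, 8, 500), rng.binomial(1, 0.6, 500)])

# Hypothetical aggregate baseline statistics from the comparator trial
w = maic_weights(x, target_means=np.array([55.0, 0.5]))

ess = w.sum() ** 2 / (w ** 2).sum()       # effective sample size after weighting
```

After weighting, `np.average(x, weights=w, axis=0)` reproduces the target means, and the effective sample size quantifies the precision lost through reweighting, one of the quantities a MAIC-specific certainty-rating approach might need to consider.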
Objectives: a) To evaluate the prevalence of MAIC use in submissions to the National Institute for Health and Care Excellence (NICE); and b) to explore how the quality/certainty of the evidence in MAICs can be rated using currently available approaches.
Methods: Scoping searches of the NICE website (without date restrictions) were conducted; these will be supplemented with searches of MEDLINE, Embase, and CENTRAL. Prevalence data will be synthesised quantitatively. The existing GRADE and CINeMA approaches will be compared, taking account of the similarities and differences between the MAIC and Bucher methods.
Results: Preliminary findings suggest that MAICs are predominantly used for reimbursement decisions in oncology. Worryingly, a large proportion of submissions to NICE rely on unanchored comparisons, in which a common comparator arm is missing; these MAICs rest on much stronger assumptions that are widely regarded as infeasible to satisfy. The work is ongoing; further findings will be available for presentation at the Colloquium in September 2023.
Conclusions: There is currently no guidance on how to rate the certainty of evidence of effect estimates obtained from MAICs. Although the MAIC and Bucher methods share certain similarities, there are distinct differences between the two. We believe a new approach is warranted; determining the certainty of evidence in MAICs may help decision-makers make informed recommendations.
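To make the contrast with the Bucher method concrete: an anchored (Bucher) indirect comparison needs only published aggregate effect estimates against a common comparator C, whereas MAIC reweights IPD. A minimal sketch, using made-up log hazard ratios purely for illustration:

```python
import math

def bucher(d_ac, se_ac, d_bc, se_bc):
    """Anchored (Bucher) indirect comparison of A vs B via common comparator C.
    Inputs are relative effects (e.g., log hazard ratios) and their standard errors."""
    d_ab = d_ac - d_bc                          # indirect estimate of A vs B
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)  # variances add under independence
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# Hypothetical log hazard ratios of A vs C and B vs C
d_ab, se_ab, ci = bucher(-0.40, 0.15, -0.10, 0.20)
```

Because the Bucher contrast preserves within-trial randomisation but makes no population adjustment, while MAIC adjusts populations but (when unanchored) abandons the common anchor, the two methods break in different ways, which is part of the argument that a single NMA-oriented rating approach does not transfer directly to MAICs.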