A Review of the 2024 Quantitative Evidence Synthesis Guidelines: Key Updates and Future Directions
Written by Sonja Kroep and Dorothea Heldt on Tuesday, July 16, 2024
In preparation for the implementation of the Joint Clinical Assessments (JCA), the Member State Coordination Group on Health Technology Assessment (HTACG) has released two guidance documents on quantitative evidence synthesis. These guidelines, which update the EUnetHTA guidelines, offer recommendations on terminology, scope, and methodology, with a particular focus on handling non-randomized evidence, assessing various methods, and defining target estimands. This article will delve into the key points of one of these documents, the Methodological Guideline for Quantitative Evidence Synthesis: Direct and Indirect Comparisons.
Compared with the EUnetHTA guidelines, the new methodological guidelines offer definitions of prognostic variables, effect modifiers, and confounders at the outset, prompting readers to consider their application throughout the assessment. The guidelines also acknowledge a wider range of evidence sources, including single-arm trials and observational studies, while still emphasizing the need for caution due to potential biases.
In the case of network meta-analyses (NMAs), the guidelines urge that when creating a network diagram based on the scope of the decision problem, one should keep the network as small as possible by excluding additional comparators and studies beyond those required to connect the network.
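In practice, this "smallest connecting network" rule amounts to finding the shortest chain of head-to-head studies that links the intervention to the comparator of interest. The sketch below illustrates the idea with a breadth-first search over a hypothetical evidence base; all treatment and trial names are invented for illustration.

```python
from collections import deque

def shortest_chain(trials, start, goal):
    """BFS for the shortest chain of head-to-head trials linking two treatments."""
    graph = {}
    for a, b in trials:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # network is disconnected: no indirect comparison possible

# Hypothetical evidence base: each edge is a trial comparing two treatments
trials = [("NewDrug", "PlaceboA"), ("PlaceboA", "StandardCare"),
          ("StandardCare", "ComparatorX"), ("OldDrug", "PlaceboA")]

print(shortest_chain(trials, "NewDrug", "ComparatorX"))
# ['NewDrug', 'PlaceboA', 'StandardCare', 'ComparatorX']
```

Treatments and studies outside the returned chain (here, "OldDrug") are exactly the additional comparators the guidelines say can be excluded, since they are not required to connect the network.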
The inclusion of non-randomized evidence
The Methodological Guideline for Quantitative Evidence Synthesis: Direct and Indirect Comparisons provides guidance on assessing non-randomized studies and adjusting for any biases in them, such as through propensity score matching. The inclusion of non-randomized evidence such as that derived from single-arm trials is dependent on access to individual patient-level data (IPD), which can be used in population-adjusted indirect treatment comparisons (PAITCs), such as simulated treatment comparisons (STCs) or matching-adjusted indirect comparisons (MAICs), to adjust for confounding bias. However, unanchored comparisons with aggregated data are seen as problematic, and non-randomized evidence should only be used if IPD is available for all included studies. As IPD is often only available for one of the included studies, conducting STCs or MAICs that include non-randomized evidence will be challenging. Furthermore, results from STCs and MAICs will only be accepted if the effect size is sufficiently large that the possibility of unidentified confounders or effect modifiers causing the effect can be dismissed. Using non-randomized evidence and population-adjusted methods will therefore carry a high risk of non-acceptance.
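To make the MAIC mechanics concrete: when IPD are available for one trial and only published aggregate data for the other, the standard approach (Signorovitch et al.) estimates patient weights by the method of moments, so that the weighted covariate means of the IPD trial match the comparator trial's published means. The sketch below, with entirely synthetic data and a single hypothetical effect modifier, is a minimal illustration of that weighting step, not of a full comparison.

```python
import numpy as np
from scipy.optimize import minimize

def maic_weights(ipd_covariates, aggregate_means):
    """MAIC weights by method of moments.

    ipd_covariates: (n, p) effect-modifier values from the IPD trial
    aggregate_means: (p,) published mean covariate values of the comparator trial
    """
    X = np.asarray(ipd_covariates, float) - np.asarray(aggregate_means, float)

    # Minimizing Q(a) = sum_i exp(x_i @ a) yields weights w_i = exp(x_i @ a)
    # whose weighted covariate means equal the aggregate means at the optimum.
    objective = lambda a: np.exp(X @ a).sum()
    gradient = lambda a: X.T @ np.exp(X @ a)

    res = minimize(objective, x0=np.zeros(X.shape[1]), jac=gradient, method="BFGS")
    w = np.exp(X @ res.x)
    return w / w.sum()  # normalized weights

# Synthetic example: reweight a trial population (mean age ~50)
# to match a published comparator-trial mean age of 55.
rng = np.random.default_rng(0)
ipd = rng.normal(loc=50, scale=10, size=(200, 1))
w = maic_weights(ipd, np.array([55.0]))
print(np.average(ipd[:, 0], weights=w))  # weighted mean age, close to 55
```

The reduction in effective sample size after weighting, often severe when populations overlap poorly, is one reason the guidelines treat these adjustments with caution.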
This concern is not new—it was also raised by the EUnetHTA guidelines. The HTACG's new contribution is the inclusion of an example statistical test for assessing the potential impact of unmeasured confounders. While the importance of evaluating the treatment effect estimate for bias beyond statistical imprecision (e.g., due to confounders) is rightly highlighted, the HTACG's emphasis on testing this additional systematic uncertainty sits uneasily with the acknowledged lack of consensus on decision-making thresholds. Introducing a critical test without clear guidance on how to act on its results, precisely because that consensus is absent, seems counterintuitive and counterproductive at this point.
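As an illustration of this general class of sensitivity analyses (not necessarily the specific test the HTACG cites), the widely used E-value of VanderWeele and Ding quantifies the minimum strength of association an unmeasured confounder would need, with both treatment and outcome, to fully explain away an observed effect:

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio (VanderWeele & Ding, 2017).

    Returns the minimum risk-ratio-scale association an unmeasured
    confounder must have with both treatment and outcome to reduce
    the observed effect to the null.
    """
    rr = 1.0 / rr if rr < 1.0 else rr  # protective effects handled by symmetry
    if rr == 1.0:
        return 1.0
    return rr + math.sqrt(rr * (rr - 1.0))

print(e_value(2.0))  # 2 + sqrt(2) ~ 3.41: confounding of this strength is needed
```

The arithmetic is simple; the hard part, as the article notes, is that no agreed threshold exists for deciding when such a value is "large enough" to dismiss confounding as an explanation.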
Adopting innovative methods
The guidelines lack detailed descriptions of more complex methods such as baseline-risk adjustment, multivariate NMAs, and shared parameter models. Additionally, there is no specific guidance on using or reporting inconsistency methods, which are crucial for our field. While the updated guidelines do not address multilevel network meta-regression, its recent application to time-to-event outcomes represents a significant advancement. Although the guidelines provide references for innovative methods used to estimate treatment effects, they lack explicit encouragement for HTA bodies to adopt those approaches. Instead of simply stating whether methods are commonly used, the guidelines could advocate for data-driven decision-making. This would encourage the use of novel methods that are appropriate for the specific context of the data, indication, scope, and broader considerations, rather than always defaulting to the “gold standard.”
In conclusion, while the 2024 update offers valuable refinements of the EUnetHTA guidelines, there are still areas that require further clarification and development, such as the handling of multiple PICOs and the application of complex methods. Referencing emerging methodologies in the guidelines would encourage health technology developers and researchers to consider cutting-edge solutions, potentially enhancing the quality and rigor of indirect treatment comparisons.
For information about OPEN Health’s services and how we could support you, please get in touch.