
TCT 2022: Limitations & Opportunities of Observational Database Analyses

 

“What we’re going to be talking about here… is the type of data you run into when you’re trying to interpret the medical literature,” states Jay Giri, MD, MPH, in this presentation from TCT 2022. He explains that while randomized controlled trials (RCTs) are the gold-standard form of evidence, they can leave gaps. Dr. Giri describes the pros, cons, and best uses of five types of observational analyses frequently used to fill in these gaps.

Administrative datasets are primarily payer datasets (e.g., Medicare, Aetna, Nationwide Inpatient Sample (NIS)) whose data come from billing codes. While they provide very large numbers of unselected patients, billing codes do not capture patient complexity, are not validated, and may be inaccurate. Administrative datasets may be most useful for epidemiology or trend analyses. Dr. Giri offers an example of “an administrative dataset gone wrong”: an analysis published in JAMA Internal Medicine that concluded patients having a myocardial infarction were more likely to survive if physicians did not intervene.
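A plausible mechanism behind a finding like this is confounding by indication: sicker patients are both more likely to be intervened upon and more likely to die, and a billing-code dataset has no severity field to adjust for. Below is a minimal Python sketch of that mechanism using an invented simulation (the model and all numbers are assumptions for illustration, not data from the study Dr. Giri cites).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical severity score: invisible to a billing-code dataset, but it
# drives both the decision to intervene and the risk of death.
severity = rng.normal(0.0, 1.0, n)
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * severity))

def death_prob(treat):
    # Assumed "true" model: intervention lowers the log-odds of death by 0.5,
    # while severity raises it strongly.
    return 1.0 / (1.0 + np.exp(-(-2.0 + 1.5 * severity - 0.5 * treat)))

died = rng.random(n) < death_prob(treated)

# Naive comparison -- the only one billing codes allow:
print(f"observed mortality, treated:    {died[treated].mean():.3f}")
print(f"observed mortality, untreated:  {died[~treated].mean():.3f}")

# Ground truth, known only because we simulated it:
print(f"true mortality if all treated:  {death_prob(True).mean():.3f}")
print(f"true mortality if none treated: {death_prob(False).mean():.3f}")
```

In the naive comparison the intervention looks harmful, even though the simulated ground truth shows it lowers mortality for every patient.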

Procedure- or disease-specific registries (e.g., National Cardiovascular Data Registry) contain large numbers of relatively unselected patients with covariates specific to a given procedure or disease; however, endpoints can be difficult to adjudicate, not all covariates can be included, and data access may be restricted. Such datasets can be useful for highlighting current practice patterns and for natural experiments with “quasi-randomization.”

Sub-analyses of RCTs examine subgroups of patients in an RCT or endpoints other than the primary endpoint. They offer high-quality data collection, often with independent endpoint adjudication; however, they can be limited by the hypotheses of the primary experiment, by selected and smaller patient populations, and by restricted access to the data. These datasets may be useful for examining subgroup responses to therapy, non-primary outcomes, and differential outcomes.

DIY (do-it-yourself) datasets offer the ability to dive deeper into data, but often with small numbers of patients and limited available information. These datasets may be useful for analyzing new and emerging techniques as well as previously unrecognized covariates or outcomes.

Meta-analyses (study level or patient level) can provide systematic reviews of trials and improve point estimates and confidence intervals. The major con, however, is heterogeneity in inclusion and exclusion criteria and in outcomes, which introduces bias when dissimilar trials are compared. They can be useful for increasing power for important questions or for understanding the impact of a new trial on the existing literature.
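As a rough illustration of how pooling improves precision, below is a minimal Python sketch of fixed-effect (inverse-variance) meta-analysis; the effect sizes and standard errors are invented for illustration and are not from any trial discussed in the presentation.

```python
import numpy as np

# Hypothetical study-level results (log odds ratios and standard errors);
# the numbers are invented for illustration, not taken from any cited trial.
log_or = np.array([-0.35, -0.10, -0.45, -0.20])
se     = np.array([ 0.25,  0.30,  0.40,  0.20])

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / variance.
w = 1.0 / se**2
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled log OR {pooled:.3f}  (95% CI {lo:.3f} to {hi:.3f})")
print(f"pooled SE {pooled_se:.3f} vs smallest single-study SE {se.min():.3f}")
```

The pooled standard error comes out smaller than any single study’s, which is the gain in power; the calculation also assumes every study estimates the same underlying effect, which is precisely what heterogeneity across dissimilar trials undermines.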

Dr. Giri concludes with the example of an analysis by Dhruva et al. comparing IABP to Impella®, explaining how it violated the intention-to-treat principle and suffered from confounding. “We run into these problems when we try to apply these registry analyses and these database analyses to the work we’re doing very frequently,” he explains, because such datasets typically are not designed to support these comparisons. “The key thing we have to be careful of when we examine all of these studies is simply that the quality of the journal doesn’t always determine what the quality of the research is.”


NPS-3361