This section presents research designs and methods that are useful to a broad range of population health and implementation science research. For each of the design features and methods described, content includes a brief description, a selection of articles illustrating its use, and links to additional resources that may be helpful in project building.
Effectiveness-Implementation Hybrid
Hybrid designs were formalized by a Veterans Affairs Quality Enhancement Research Initiative (QUERI) working group seeking to conduct more effective research across the ‘efficacy-effectiveness-implementation’ continuum. The hybrid design offers a dual focus, assessing intervention effectiveness and implementation outcomes at the same time. The advantages of a hybrid design are faster translation and rigorous assessment of the relationships between effectiveness and implementation.
There are three hybrid design types:
- Type 1 primarily tests the effectiveness of interventions, while also observing and gathering information on implementation outcomes;
- Type 2 simultaneously tests the impact of strategies on both effectiveness and implementation outcomes;
- Type 3 primarily tests the impact of strategies on implementation outcomes, while observing and gathering information on the intervention’s impact on health outcomes.
The particular combination of theories and methods varies in a hybrid design. Randomized Controlled Trials, Stepped-Wedge or other time-delay methods, and pragmatic trials could be used to test intervention effectiveness. Mixed methods techniques could be used to assess the adoption and use of specific implementation strategies leading to the intervention’s effects.
Curran et al. developed a table showing how design characteristics vary between clinical effectiveness and implementation trials (reproduced from Curran et al., 2012):
| Design Characteristic | Clinical Effectiveness Trial | Implementation Trial |
| --- | --- | --- |
| Test | ‘Clinical’ intervention | Implementation intervention or strategy |
| Typical unit of randomization | Patient, clinical unit | Provider, clinical unit, system |
| Typical unit of analysis | Patient | Provider, clinical unit, system |
| Summative outcomes | Health outcomes; process/quality measures typically considered intermediate; costs | Adoption/uptake of ‘clinical’ intervention; process/quality measures typically considered outcomes |
Type 1 studies are therefore primarily focused on the clinical effectiveness aims, with a secondary examination of implementation outcomes; Type 2 studies construct co-primary aims across effectiveness and implementation outcomes; and Type 3 studies focus on implementation outcomes, with secondary aims to assess clinical outcomes associated with the implementation trial. Type 1 studies often compare the clinical intervention to usual care or a control group, whereas Types 2 and 3 usually compare two or more implementation strategies for delivering the same intervention.
There may be barriers to conducting hybrid trials. Type 2 hybrid design studies may require the most resources in personnel, expertise, and budget in order to rigorously achieve both sets of aims in a highly coordinated fashion. Use of the hybrid design will generally require a significant degree of explanation to audiences who are primarily familiar with individual-level experimental, efficacy/effectiveness trial designs.
Illustrating Applications
- Curran, GM, et al. 2012. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 50(3): 217-26.
Background on hybrid designs, establishing definitions and applications. Several additional articles are mentioned to provide examples of hybrid study Types 1, 2, and 3.
- Boeckmann M, et al. 2018. Protocol for the mixed-methods process and context evaluation of the TB & Tobacco randomised controlled trial in Bangladesh and Pakistan: a hybrid effectiveness-implementation study. BMJ Open. doi: 10.1136/bmjopen-2017-019878.
Protocol laying out details in the design of a hybrid and mixed methods process/outcomes evaluation of a TB and smoking cessation intervention in Bangladesh and Pakistan.
Additional resources
- Let’s Discuss: “Hybrid Designs” Combining Elements of Clinical Effectiveness and Implementation Research. 2015. Hosted by ‘Research to Reality,’ NIH National Cancer Institute. YouTube Recorded Webinar.
Pragmatic Design
A pragmatic trial is one that studies treatment choices in typical ‘real-world’ settings and populations to better guide treatment decisions in routine clinical practice. Its principles are to maximize generalizability or relevance to a broad array of settings or patient populations, and to give a more realistic picture of how treatments are used and their effects, while also maintaining adequate internal validity and rigor. The pragmatic trial typically comes after a randomized controlled efficacy study, sometimes called an ‘explanatory trial,’ which establishes efficacy under controlled and ideal circumstances. Once efficacy is demonstrated, the pragmatic trial shows how a treatment can be optimally used when introduced into real-world, or ‘usual’, clinical practice.
The four domains of pragmatic designs to consider are:
- Study population: The population should mirror a real-world population, rather than controlling for complicating variables, to maximize generalizability. This means broadening eligibility and inclusion criteria to diversify the sample.
- Trial setting: Setting should be more ‘real-world’ and less ‘controlled research environment’. This means broadening the types of settings involved in the study. Data collection burden can be minimized by using already collected clinical and administrative data as part of the study.
- Treatment strategy and comparator treatment: Which interventions to include, and their mode of implementation, should mimic real-world practices and be relevant. In other words, the treatments being studied may already be available, but more systematic study may be needed to understand how the treatments work, under what conditions and for whom, and which implementation strategies are most effective for delivering the intervention.
- Outcome measure: Measured outcomes should be relevant to clinical practice, and to the real-world choices that providers and patients are able to make.
Review additional domains in which your study can reflect real-world clinical practice (see, for example, the PRECIS-2 tool under Additional resources).
Hybrid designs and pragmatic trials can be combined, as demonstrated in the effectiveness-implementation hybrid trial targeting posttraumatic stress disorder and comorbidities (Zatzick et al., 2016). As Zatzick et al. state with regard to the pragmatic design, “Gold standards for pragmatic trial design and implementation include broad participant eligibility criteria, flexible intervention delivery, application by the full range of practitioners, and incorporation of rigorous prospective controls, preferably by randomization. Usual practice comparison conditions are frequently used in pragmatic trials.”
There may be operational challenges to consider when implementing pragmatic trials. Real-world clinical settings may struggle with resource limitations when they attempt to implement pragmatic studies, as research design becomes embedded into daily operations. Relatedly, pragmatic trialists should be aware of the potential for study design choices to be in tension with the goal of optimizing relevance and generalizability (see Additional Resources section for websites that further support pragmatic trial implementation).
Illustrating Applications
- McCormack S, et al. Pre-Exposure Prophylaxis to prevent the acquisition of HIV-1 infection (PROUD): effectiveness results from the pilot phase of a pragmatic open-label randomized trial. Lancet. 2016; 387(10013): 53-60. doi: 10.1016/S0140-6736(15)00056-2.
This study used a pragmatic design to compare immediate versus deferred PrEP as it would be used in routine clinical practice.
- Zatzick, et al. An effectiveness-implementation hybrid trial protocol targeting posttraumatic stress disorder and comorbidity. Implementation Science. 2016; 11:58. doi: 10.1186/s13012-016-0424-4.
This study uses a pragmatic and effectiveness-implementation hybrid design to examine interventions to treat posttraumatic stress and comorbidities.
Additional resources
- Thorpe et al. A pragmatic-explanatory continuum indicator summary (PRECIS): A tool to help trial designers. J Clin Epidemiol. 2009; 62(5): 464-475. doi: 10.1016/j.jclinepi.2008.12.011.
Explains the PRECIS tool, which assists researchers in designing pragmatic trials.
- PRECIS-2 and PRECIS-2 Toolkit
Toolkit to help trial designers make studies pragmatic.
- ‘PragMagic’: aims to “maximise generalisability of trial findings to the routine care setting of interest while ensuring validity and operational feasibility of the trial.”
Web-based tool to plan the design of a pragmatic study as well as background on pragmatic design.
- Putting Real-World Healthcare Data to Work: Study Design-Pragmatic Trial.
Additional guidance on employing pragmatic trials.
- Guidance on the Design and Conduct of Trials in Real-World Settings: Factors to consider in pragmatic patient-centered outcomes research
Funding body guidance on pragmatic designs that draws upon PRECIS approaches.
Flexible/Adaptive Designs
Flexible/adaptive designs introduce pre-specified modifications to the design or statistical procedures of an ongoing trial, based on data accumulating from subjects in the trial, to give the trial flexibility. Pallmann et al. describe adaptive designs as ‘flexible planning’ and ‘driving with one’s eyes open.’ An example of an adaptive design would be a trial with several treatment arms in which, midway through the trial, allocation is shifted toward the more efficacious arms and away from the less efficacious ones (a minimal simulation of this kind of response-adaptive allocation is sketched after the list of design types below). Adaptive design trials are intended to boost clinical research by cutting down on cost and time, while retaining validity and integrity.
Figure 1 of Pallmann et al. demonstrates how adaptive designs depart from conventional RCTs by adding in a review–adapt loop to the linear design–conduct–analysis sequence.
There are several types of adaptive designs, including, but not limited to (adapted from Pallmann et al):
- Adaptive randomization: Shift allocation ratio towards more promising or informative treatment(s)
- Multi-arm/multi-stage: Explore multiple treatments, doses, durations or combinations with options to ‘drop losers’ or ‘select winners’ early
- Group sequential: Include options to stop the trial early for safety, futility or efficacy
- Sample size re-estimation: Adjust sample size to ensure the desired power
- Adaptive dose-finding: Shift allocation ratio towards more promising or informative dose(s)
- Biomarker-adaptive: Incorporate information from or adapt on biomarkers
- Adaptive seamless phase II/III: Combine treatment or dose selection (phase II) and confirmatory testing (phase III) within a single trial
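As a concrete illustration of the response-adaptive allocation described above, the following is a minimal sketch in Python. The two-arm setup, Thompson-sampling allocation rule, response rates, and sample size are all illustrative assumptions, not drawn from any particular trial; it simply shows how allocation can drift toward the better-performing arm as outcome data accumulate.

```python
# Minimal sketch of response-adaptive randomization (Thompson sampling)
# in a two-arm trial with binary outcomes. All values are illustrative.
import random

random.seed(42)

TRUE_RESPONSE = {"A": 0.30, "B": 0.50}   # hypothetical true response rates
N_PATIENTS = 300

successes = {"A": 0, "B": 0}   # Beta(1, 1) priors: alpha = successes + 1
failures = {"A": 0, "B": 0}    #                    beta  = failures  + 1
allocations = {"A": 0, "B": 0}

for _ in range(N_PATIENTS):
    # Draw a plausible response rate for each arm from its posterior,
    # then allocate the next patient to the arm with the higher draw.
    draws = {
        arm: random.betavariate(successes[arm] + 1, failures[arm] + 1)
        for arm in ("A", "B")
    }
    arm = max(draws, key=draws.get)
    allocations[arm] += 1

    # Simulate the patient's binary outcome and update that arm's posterior.
    if random.random() < TRUE_RESPONSE[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print("Allocations:", allocations)
print("Observed response rates:",
      {arm: round(successes[arm] / max(allocations[arm], 1), 2)
       for arm in ("A", "B")})
```

Running the sketch typically allocates far more patients to the higher-responding arm, which is the behavior an adaptive randomization rule is designed to produce; a real trial would pre-specify the rule, interim analysis schedule, and stopping criteria in the protocol.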
Adaptive designs support implementation science priorities because they adjust to real-world conditions faster than more controlled trials, deploying treatment strategies more quickly to participants while potentially reducing study costs. They can be used formatively or at the confirmatory stage, to determine safe and effective treatment doses and dose ranges. The literature on adaptive designs emphasizes the importance of thorough and detailed preliminary planning to ensure that enrollment and any changes to treatment arms are adequately developed and powered.
Adaptive design trials are not the same as Adaptive Treatment Strategies (ATS), which aim to systematically formalize and improve the sequencing of treatment regimens for individuals over time, although ATS can be tested using adaptive/flexible trial designs.
Illustrating Applications
- Meurer, W et al. An overview of the adaptive designs accelerating promising trials into treatments (ADAPT-IT) project. Ann Emerg Med. 2012. 60(4): 451-457. doi: 10.1016/j.annemergmed.2012.01.020.
Guidance on using adaptive designs, via the ADAPT-IT project.
Additional resources
- Bhatt, D and Mehta C. Adaptive Designs for Clinical Trials. NEJM 2016. 375:65-74. DOI: 10.1056/NEJMra1510061
Overview article with study examples of different adaptive design trials.
- Mahajan R, Gupta K. Adaptive design clinical trials: methodology, challenges, and prospect. Indian J Pharmacol. 2010; 42(4): 201-207. doi: 10.4103/0253-7613.68417.
Background article specifying issues related to employing the adaptive trial design.
Mixed Methods
Mixed methods research combines or mixes quantitative and qualitative methodological components in the same study, or series of studies. Quantitative research generates numerical data using methods like clinical trials and surveys/questionnaires. Qualitative approaches tend to generate non-numerical or meaning-oriented data, using methods like semi-structured interviews, in-depth interviews, focus group discussions, and participant observations. Methods are brought together to explore a question or questions in which the answer is better understood by synthesizing, or bringing into relationship, different data sources and analytic processes rather than using one or another method independently.
Mixed methods is not ‘parallel play’ between methods, but a thoughtful bringing together of complementary analytic and methodological parts. For example, in a study examining the knowledge, attitudes, and practices of HIV providers with regard to rapid ARV medication initiation with newly diagnosed individuals, a survey can quantify what is known, felt, and practiced by providers, while in-depth interviews explore why such beliefs are held, their basis, the nature of their influence upon practice, and meaningful relationships between significant elements. Mixed methods can therefore provide greater explanatory power through techniques like triangulation, at times helping to resolve what may appear to be confounding results. Mixed methods studies usually rely upon larger and more diverse study teams, in which different methodological experts work together in multi- or trans-disciplinary teams.
Implementation science frameworks and research approaches harness the benefits of mixed methods approaches. For example, hybrid designs use mixed methods to integrate questions answered quantitatively, such as the degree of treatment effectiveness or the relative costs associated with different options, with questions answered qualitatively, such as how or why a result occurred given the way the treatment or intervention was implemented. Real-world implementation processes are influenced by organizational factors and other structural factors and contexts. While these may be measured using quantitative methods, more can be gleaned by also including open-ended or inductive methods, which do not pre-determine the relevant factors or how they operate. This increases the chances of discovering what is important about the particular phenomenon of implementation under study.
Potential qualitative data sources are numerous, and can be selected based on available time and expertise. Naturally occurring qualitative data can be harnessed for learning as well (e.g., documents or textual analysis of program materials, websites, meeting notes, administrative documents, correspondence); using these pre-existing sources of data in mixed methods studies echoes the pragmatic-design emphasis on drawing on already collected data.
When developing mixed methods studies, researchers will decide whether the quantitative and qualitative methods are conducted sequentially (and in what order) or concurrently. Qualitative methods may be used during a formative or exploratory step to inform quantitative measures, or during an explanatory step when structured quantitative data are being analyzed. Deciding whether and how to order the methods depends on the nature of the implementation science study and the questions being asked.
Illustrating Applications
- Sheringham et al. 2017. The value of theory in programmes to implement clinical guidelines: Insights from a retrospective mixed-methods evaluation of a programme to increase adherence to national guidelines for chronic disease in primary care. PLoS ONE. 12(3): e0174086. https://doi.org/10.1371/journal.pone.0174086
Additional resources
- Green et al. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities. Administration and Policy in Mental Health and Mental Health Services Research. 2014; 42(5): 508-523.
- Palinkas, L. Mixed Methods in Implementation Science. 2015. NIH National Cancer Institute. YouTube Recorded Webinar.
- Tariq, S and J Woodman. Using mixed methods in health research. JRSM Short Reports. 2013; 4(6). doi: 10.1177/2042533313479197.
Details on applying mixed methods in health research, and various mixed methods designs.
- Tross, S. Mixed Methods Research: Indications, Approaches, Advantages, & Findings. Digital Seminar through the Columbia University HIV Center. (57 minutes)
Digital training seminar describing the what and how of mixed methods research, including HIV-related case study.
Cost Effectiveness and Economic Analysis
A costing study often refers broadly to an economic evaluation designed to inform resource allocation decisions. There are many types of economic evaluations—including cost-minimization, cost-benefit, and cost-effectiveness studies—that involve comparisons of the economic costs and health effects of multiple interventions. Cost-effectiveness studies, which are commonly used in public health and medicine, are described in more detail below.
While the term ‘costing study’ is often used broadly to describe any type of health-related economic evaluation, it more accurately refers to estimation of the individual-level economic costs required to provide an intervention. A true costing study thus serves as the foundation for all other economic evaluations. To conduct a cost analysis, a cost typology framework is first selected (e.g., fixed/variable, capital/recurrent, financial/economic). Next, various cost components of an intervention are identified and assigned to the cost framework; examples of cost components include intervention-related costs for medication, staff or other personnel, services, supplies, and utilities. Unit costs are then assigned to each cost component, and the product of each unit cost and the quantity of the resource used per individual is calculated. The sum of these costs represents the total cost required to implement the intervention.
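To make the arithmetic concrete, the sketch below (in Python) sums unit cost × quantity across cost components to produce a per-person intervention cost. The component names, unit costs, and quantities are entirely hypothetical and are used only to illustrate the calculation.

```python
# Minimal sketch of a per-person cost analysis with hypothetical components.
# Each component maps to (unit cost in USD, quantity used per person).
cost_components = {
    "medication (per month)": (25.00, 12),   # 12 months of a drug
    "clinic visits": (40.00, 4),             # 4 provider visits
    "lab tests": (15.00, 2),                 # 2 monitoring tests
    "counseling sessions": (30.00, 3),       # 3 adherence sessions
}

# Per-person cost = sum of (unit cost x quantity) across all components.
per_person_cost = sum(unit * qty for unit, qty in cost_components.values())

for name, (unit, qty) in cost_components.items():
    print(f"{name:<25} {unit:>8.2f} x {qty:>3} = {unit * qty:>8.2f}")
print(f"{'Total per-person cost':<25} {per_person_cost:>23.2f}")
```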
What is a cost effectiveness study?
Cost-effectiveness analysis is an analytic approach designed to inform resource allocation decisions in medicine and public health. This type of study involves comparative analysis of the health and economic consequences of two or more alternative interventions. The alternative interventions are compared incrementally against each other, with the ratio of the difference in economic costs to the difference in health effects expressed as an incremental cost-effectiveness ratio. The incremental cost-effectiveness ratio represents value for money, or the opportunity cost of an alternative use of resources (i.e., the health benefits foregone, or given up, when resources are allocated to the next best alternative).
Cost-effectiveness analysis can be conducted from a variety of perspectives, where relevant costs for patients, payers, and/or society are captured depending on the perspective of interest. An intervention is considered cost-effective and an efficient use of resources if its incremental cost-effectiveness ratio falls below a given societal willingness-to-pay threshold, which varies by country and income level. The method can be extended to select an optimal portfolio of interventions or programs; that is, when a budget level is known and the cost-effectiveness ratio of each available intervention or program is available, the ratios can be rank-ordered and interventions or programs adopted until the budget is exhausted. Cost-effectiveness analyses are typically implemented using mathematical modeling approaches, since mathematical models can simulate a lifetime time horizon, capture both morbidity and mortality over this lifespan, and thus promote meaningful comparisons of analysis findings across diseases, conditions, and settings.
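As a simple illustration of the incremental comparison described above, the sketch below (in Python) computes an incremental cost-effectiveness ratio and compares it to a willingness-to-pay threshold. The costs, QALYs, and threshold are hypothetical values chosen only to show the calculation, not estimates from any study.

```python
# Minimal sketch of an incremental cost-effectiveness ratio (ICER)
# calculation with hypothetical per-person costs and QALYs.
cost_new, qaly_new = 12_000.0, 6.2    # new intervention
cost_cmp, qaly_cmp = 9_000.0, 6.0     # comparator (e.g., usual care)
wtp_threshold = 50_000.0              # hypothetical willingness-to-pay per QALY gained

# ICER = (difference in costs) / (difference in health effects)
icer = (cost_new - cost_cmp) / (qaly_new - qaly_cmp)

print(f"ICER: ${icer:,.0f} per QALY gained")
if icer <= wtp_threshold:
    print("At or below the assumed threshold: considered cost-effective.")
else:
    print("Above the assumed threshold: not considered cost-effective.")
```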
How does the concept of cost-effectiveness differ from affordability?
While cost-effectiveness represents the opportunity cost of resources in their next best alternative use, affordability represents the total cost required to implement an intervention for all individuals eligible to receive it. The distinction is important: An intervention may be considered cost-effective in that it meets or falls below a particular societal willingness-to-pay threshold. However, the total cost of implementing the intervention for all those eligible to receive it may exceed the available budget. Therefore, a cost-effective intervention may not be an affordable one. Ultimately, studies involving affordability move beyond costing analysis (which estimates the per-person cost of an intervention) and cost-effectiveness analysis (which estimates value for money) to estimate the total budget required to provide the intervention for a given population.
A study evaluating affordability is typically referred to as a ‘budget impact analysis.’ Notably, budget impact analysis does not explicitly capture the health consequences of an intervention. The approach is conducted from a payer perspective and typically has a short analytic time horizon (5 years or less). The approach can provide information on the budget impact of replacing one intervention or program with another, of expanding existing services to include an additional intervention, or of adding an intervention where none existed before.
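The sketch below (in Python) illustrates a simple budget impact calculation over a five-year payer horizon, using a hypothetical per-person cost, eligible population, annual uptake, and budget. With these assumed figures, the intervention fits the budget in early years but exceeds it as uptake grows, echoing the point that a cost-effective intervention may not be affordable.

```python
# Minimal sketch of a budget impact calculation. All figures are
# hypothetical and chosen only to illustrate the arithmetic.
per_person_cost = 1_200.0        # annual cost of the intervention per person
eligible_population = 50_000     # people eligible to receive it
uptake_by_year = [0.10, 0.20, 0.30, 0.40, 0.50]   # share reached each year
annual_budget = 15_000_000.0     # payer's available budget per year

for year, uptake in enumerate(uptake_by_year, start=1):
    total_cost = per_person_cost * eligible_population * uptake
    status = "within" if total_cost <= annual_budget else "exceeds"
    print(f"Year {year}: ${total_cost:,.0f} ({status} the ${annual_budget:,.0f} budget)")
```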
Illustrating Applications
- Mozaffarian D, Liu J, Sy S, Huang Y, Rehm C, Lee Y, Wilde P, Abrahams-Gessel S, de Souza Veiga Jardim T, Gaziano T, Micha R. Cost-effectiveness of financial incentives and disincentives for improving food purchases and health through the US Supplemental Nutrition Assistance Program (SNAP): A microsimulation study. PLoS Med 2018;15: e1002661.
- Liu S, Cipriano LE, Holodniy M, Owens DK, Goldhaber-Fiebert JD. New protease inhibitors for the treatment of chronic hepatitis C: a cost-effectiveness analysis. Ann Intern Med 2012;156:279-90.
- Olney JJ, Braitstein P, Eaton JW, Sang E, Nyambura M, Kimaiyo S, McRobie E, Hogan JW, Hallett TB. Evaluating strategies to improve HIV care outcomes in Kenya: a modelling study. Lancet HIV 2016;3:e592-e600.
Additional resources
- Sanders GD, Maciejewski ML, Basu A. Overview of Cost-effectiveness Analysis. JAMA 2019;321:1400-1401.
- Center for Health Economics of Treatment Interventions for Substance Use Disorder, HCV, and HIV (CHERISH)
- Centers for Disease Control and Prevention, Polaris Economic Evaluation
- US Department of Veterans Affairs, Health Economics Resource Center
- WHO-CHOICE (CHOosing Interventions that are Cost-Effective)
Discrete Choice Experiments
Discrete Choice Experiments (DCEs) are used to quantify individuals’ preferences and the trade-offs they make when choosing services and goods; this knowledge can help drive greater uptake of and engagement with evidence-based interventions, a central concern of implementation science. The goal is to identify which attributes drive respondents’ decision-making behavior, and to use this knowledge to create more desirable packages of programs, products, or services. Increasingly, DCEs are used in healthcare to better understand how to improve services in line with people’s preferences.
While traditional survey administration methods, such as Likert scales, could be used to assess relative preferences for individual attributes or service models, the DCE method better simulates real-world service settings and the trade-offs people make between different features of a product or service. In a DCE, instead of considering features of a product or service in isolation, sets of features are considered as part of various possible packages in comparison with other sets of features.
In DCEs, respondents are shown a series of hypothetical products or services (“choice sets”), and asked to select which ones they prefer. Choice sets are made up of multiple “attributes,” such as color and size, and “attribute levels,” such as red, blue, or yellow, and small, medium, or large. The number of choice sets presented in each task depends in part on the complexity of the product or service. Task complexity increases with the number of attributes and attribute levels, so care should be taken when designing the study. There are DCE design methods that can optimize statistical power and minimize participant burden by randomizing the order in which choice sets are presented to each participant. Randomization can also increase design efficiency by removing the need for every participant to rate every possible choice set.
To determine which options to include in a survey, it is helpful to conduct formative research, consisting of open-ended interviews or focus groups, and/or literature reviews. For example, in a project to determine how primary care services should be delivered, focus group discussions can elicit the most important attributes to include (e.g., wait times) and what the options, or levels, within each attribute should be (10 minutes, 20 minutes, etc.). Other issues to consider when designing DCEs are how many attributes to include in a choice set, how many choice sets to present at a time, how many tasks to include in the survey in total, and whether and how to use visuals and text to improve comprehension. There are different analytical models that can be applied, which will inform the sample size, choice set design, and the number of choice sets used.
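As a simple illustration of moving from attributes and levels to choice tasks, the sketch below (in Python) enumerates the full factorial of profiles for a hypothetical primary care service and randomly pairs them into paired-comparison tasks. The attribute names, levels, and number of tasks are illustrative assumptions; a real DCE would normally use specialized design software to optimize statistical efficiency rather than simple random pairing.

```python
# Minimal sketch of building DCE choice tasks from hypothetical attributes.
import itertools
import random

random.seed(1)

attributes = {
    "wait time": ["10 minutes", "20 minutes", "40 minutes"],
    "visit cost": ["$0", "$10", "$25"],
    "provider": ["physician", "nurse practitioner"],
}

# Full factorial: every combination of attribute levels is one profile.
profiles = [
    dict(zip(attributes, combo))
    for combo in itertools.product(*attributes.values())
]
print(f"{len(profiles)} possible profiles")   # 3 x 3 x 2 = 18

# Randomly pair distinct profiles into paired-comparison tasks;
# each task asks "Which option would you choose?"
N_TASKS = 6
tasks = [random.sample(profiles, 2) for _ in range(N_TASKS)]

for i, (option_a, option_b) in enumerate(tasks, start=1):
    print(f"\nTask {i}: which would you choose?")
    print("  A:", option_a)
    print("  B:", option_b)
```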
Click here to try this sample DCE around PrEP.
Illustrating Applications
- Strauss, M et al. HIV testing preferences among long distance truck drivers in Kenya: a discrete choice experiment. AIDS Care. 2018. 30 (1). https://doi.org/10.1080/09540121.2017.1367086
HIV-related example of use of DCEs in global health context.
- Zanolini A, et al. 2018. Understanding preferences for HIV care and treatment in Zambia: Evidence from a discrete choice experiment among patients who have been lost to follow-up. PLoS Medicine. https://doi.org/10.1371/journal.pmed.1002636
HIV-related example of use of DCEs in global health context.
Additional Resources
- User Guide with Case Studies: How to conduct a discrete choice experiment for health workforce recruitment and retention in remote and rural areas. USAID, CapacityPlus, WHO, World Bank. 2012.
A step-by-step guide to setting up DCE studies.
- Bridges, J. F. P., Hauber, A. B., Marshall, D., Lloyd, A., Prosser, L. A., Regier, D. A., et al. (2011). Conjoint Analysis Applications in Health—a Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value in Health, 14(4), 403–413. http://doi.org/10.1016/j.jval.2010.11.013
- Salloum RG, Shenkman EA, Louviere JJ, Chambers DA. Application of discrete choice experiments to enhance stakeholder engagement as a strategy for advancing implementation: a systematic review. Implementation Science. 12(140). 2017.
- Dubov, A., Ogunbajo, A., Altice, F. L., & Fraenkel, L. (2018). Optimizing access to PrEP based on MSM preferences: results of a discrete choice experiment. AIDS Care, 31(5), 545–553. http://doi.org/10.1080/09540121.2018.1557590
- Sawtooth Software technical papers.