This section offers a sample of popular IS models, frameworks, and theories that will be useful to a broad range of population health researchers and implementers. They can inform study design and construct selection. For models, frameworks, and theories of deep interest, studies can also be designed to test, expand, and refine them.
For each of the common theories described, the content includes: a brief overview of key constructs and principles; articles illustrating their application through an IS-for-population-health lens; and links to additional resources that support further training and application.
Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) is an implementation framework that was first published in 1999 to assess the effective adoption of health-related interventions in organizational settings. RE-AIM focuses on essential program elements that can improve the sustainable adoption and implementation of effective, generalizable, evidence-based interventions across diverse settings. RE-AIM gathers measures that are pertinent at individual (both ‘patient’ and ‘provider’) and setting or organizational levels.
Core measures of RE-AIM:
- Reach: the absolute number, proportion, and representativeness of individuals who are willing to participate in a given initiative, intervention, or program.
- Effectiveness: The impact of an intervention on important outcomes, including potential negative effects, quality of life, and economic outcomes.
- Adoption: The absolute number, proportion, and representativeness of settings and intervention agents (people who deliver the program) who are willing to initiate a program.
- Implementation: At the setting level, implementation refers to the intervention agents’ fidelity to the various elements of an intervention’s protocol, including consistency of delivery as intended and the time and cost of the intervention. At the individual level, implementation refers to clients’ use of the intervention strategies.
- Maintenance: The extent to which a program or policy becomes institutionalized or part of the routine organizational practices and policies. Within the RE-AIM framework, maintenance applies at individual and organizational levels.
Each core measure produces a numeric result reflecting the degree of effective implementation. For example, Reach measures the percentage of individuals receiving a service out of all eligible individuals; the closer to 100%, the more effective the implementation. Comparing measures is also important to ensure denominators are accurate and interventions are implemented effectively. For example, Adoption measures the uptake of interventions by service providers. In a study where Reach is high but only a few of the eligible providers participate, the true Reach may be much lower than it first appears. Measures are defined based on the specific programmatic context, and should be reviewed with implementation partners for accuracy and appropriateness.
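To make the denominator issue concrete, here is a minimal sketch in Python. All counts are hypothetical illustrations, not drawn from any study: high within-clinic Reach can coexist with low Adoption, shrinking population-level reach.

```python
# Hypothetical RE-AIM arithmetic: Reach looks strong within adopting
# clinics, but low Adoption shrinks the population-level picture.

def proportion(numerator, denominator):
    """Return the numerator as a percentage of the denominator."""
    return 100.0 * numerator / denominator

# Reach, computed only within the 4 clinics that adopted the program.
participants = 400
eligible_in_adopting_clinics = 500
reach = proportion(participants, eligible_in_adopting_clinics)

# Adoption, computed across all eligible clinics.
adopting_clinics = 4
eligible_clinics = 20
adoption = proportion(adopting_clinics, eligible_clinics)

# If the 16 non-adopting clinics serve comparable populations, the
# denominator grows and the "true" reach is far lower.
eligible_across_all_clinics = 2500
population_reach = proportion(participants, eligible_across_all_clinics)

print(f"Reach within adopters: {reach:.0f}%")              # 80%
print(f"Adoption: {adoption:.0f}%")                        # 20%
print(f"Population-level reach: {population_reach:.0f}%")  # 16%
```

Reviewing both numerators and denominators with implementation partners, as suggested above, is what keeps these percentages honest.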
RE-AIM is one of the best-suited pre-existing IS frameworks for public health implementation research because it can measure the effectiveness of both the intervention and the implementation process across diverse settings using pre-defined measures. At the start of a project, RE-AIM should be used to establish core measures that are applied consistently across all implementation settings; it can also be used retrospectively to understand the context of intervention outcomes. As a multi-level framework, RE-AIM can also combine different process and outcome measures, such as staff adoption of an intervention alongside the uptake of intervention components by the target group(s).
RE-AIM is less well-suited to exploring the ‘why and how’ questions of implementation, particularly the context-related factors that influence the measured outcomes. It also does not account for changes occurring during implementation, or for the dynamic interactions between program elements. To study these dimensions, it may be useful to combine RE-AIM with other IS frameworks and models (see guidance on combining frameworks).
Studies using RE-AIM in population health projects:
- Mugwanya, KK et al. 2018. Scale up of PrEP integrated in public health HIV care clinics: a protocol for a stepped-wedge cluster-randomized rollout in Kenya. Implementation Science. 13:118. doi: 10.1186/s13012-018-0809-7.
Partners Scale Up Project, a PrEP intervention being implemented in 24 comprehensive care clinics in Kenya, uses RE-AIM as part of the evaluation of its implementation effectiveness.
- Gaglio, B, Shoup, JA, Glasgow, RE. 2013. The RE-AIM Framework: A systematic review of use over time. Am J Pub Health. 103(6): e38-e46. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3698732/
Systematic review of RE-AIM applications between 1999 and 2010 by key constructs. Of 71 articles, the most frequent applications were in physical activity, obesity, and disease management.
- Harden, S. et al. 2018. RE-AIM in Clinical, Community, and Corporate Settings: Perspectives, Strategies, and Recommendations to Enhance Public Health Impact. Frontiers in Public Health. 22 March 2018. https://doi.org/10.3389/fpubh.2018.00071
Perspective article describing RE-AIM and sharing lessons learned from implementing it in different settings and across diverse health topics.
The Consolidated Framework for Implementation Research (CFIR) was developed and first published in 2009 by implementation researchers affiliated with the Veterans Affairs Diabetes Quality Enhancement Research Initiative. As the name suggests, CFIR constructs are derived from an extensive review of implementation theories, frameworks, and models, and their applications, providing a consolidated and structured method for examining the most common multi-level contextual factors and processes influencing implementation. The multi-level factors explored in CFIR include the intervention’s internal features and qualities; the implementation setting, including the individuals involved in implementation; the process of implementation; and the external, system-level, or structural factors that may enable or stymie the implementation process. CFIR can be applied at the formative or pre-implementation stage to assess and enhance the likelihood of effective implementation, or during or after implementation to determine the extent of implementation and the factors shaping intervention or implementation effectiveness outcomes (e.g., see RE-AIM measures).
CFIR consists of 5 large domains that further break down into 39 constructs. Domains and constructs are as follows:
- Intervention Characteristics: Intervention source, Evidence strength & quality, Relative advantage, Adaptability, Trialability, Complexity, Design quality & packaging, Cost
- Outer Setting: Patient needs & resources, Cosmopolitanism, Peer pressure, External policy & incentives
- Inner Setting: Structural characteristics, Networks & communications, Culture, Implementation climate, Tension for change, Compatibility, Relative priority, Organizational incentives & rewards, Goals and feedback, Learning climate, Readiness for implementation, Leadership engagement, Available resources, Access to knowledge & information
- Characteristics of Individuals: Knowledge and beliefs about the intervention, Self-efficacy, Individual stage of change, Individual identification with organization, Other personal attributes
- Process: Planning, Engaging, Opinion leaders, Formally appointed internal implementation leaders, Champions, External change agents, Executing, Reflecting & evaluating
Given the extensiveness of CFIR, it may not be feasible, or even practical, to incorporate every construct into a study design. A study or evaluation may therefore focus on a subset, given what is known about implementation sites and which constructs may be most relevant to use. For example, under Outer Setting, the ‘Cosmopolitanism’ construct is defined as “the degree to which an organization is networked with other external organizations.” This may not be relevant if the organizations implementing an intervention are so spread out that their affiliated networks are not an important piece of information to collect.
Construct selection may be conducted through a group feedback process or a survey of those knowledgeable about the implementation environment. Constructs may also be chosen based on which are most relevant for use in combination with any additional implementation science or behavioral theories or models being applied to the intervention or the implementation process. For example, in studies using RE-AIM and CFIR, the selection of CFIR constructs should correspond to and provide greater depth of understanding around the RE-AIM domains.
CFIR constructs can explain which, why and how different factors influence implementation using both quantitative and qualitative analysis methods. Scoring can be applied to survey data and even qualitative interview or focus group data to determine which factors have the greatest influence, as well as the type of influence (positive or negative). In-depth qualitative analysis of interview and focus group data can explore the meaning behind different factors, and their interactions. If there is diversity of individuals involved in implementation (e.g., different clinical roles), or in implementing organizations (e.g., hospitals and community-based organizations), then data can be stratified by subgroups to look for differences in experiences, actions, or perspectives which may help to explain different implementation outcomes.
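As a sketch of how such scoring might be organized, the snippet below rates CFIR constructs on a -2 to +2 valence scale (negative = barrier, positive = facilitator) per facility, then looks for the constructs that vary most across facilities. The facilities, constructs chosen, and ratings are hypothetical.

```python
# Hypothetical CFIR-style rating exercise: constructs coded from
# interview data on a -2..+2 valence scale, compared across facilities.
from statistics import mean

ratings = {
    "Facility A": {"Leadership engagement": 2, "Available resources": 1,
                   "Relative advantage": 2},
    "Facility B": {"Leadership engagement": -1, "Available resources": -2,
                   "Relative advantage": 1},
}

def facility_summary(scores):
    """Mean valence across rated constructs for one facility."""
    return mean(scores.values())

def construct_contrast(ratings, construct):
    """Spread of one construct's ratings across facilities; large spreads
    flag constructs that may explain variation in implementation."""
    values = [r[construct] for r in ratings.values()]
    return max(values) - min(values)

summaries = {f: facility_summary(s) for f, s in ratings.items()}
# Constructs sorted by how much they differ across facilities.
contrasts = sorted(ratings["Facility A"],
                   key=lambda c: construct_contrast(ratings, c),
                   reverse=True)
print(summaries)
print(contrasts)
```

In this invented example, leadership engagement and available resources differ sharply between facilities, while relative advantage is consistently positive, mirroring the kind of stratified comparison described above.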
Studies using CFIR in population health projects
- Damschroder, LJ and Lowery, JC. 2013. Evaluation of a large-scale weight management program using the consolidated framework for implementation research. Implementation Science. 8:51.
Implementation study of a large VA health program, which shows how to identify and score CFIR constructs to understand variation in implementation across facilities.
- Gimbel, S et al. 2016. Evaluation of a systems analysis and improvement approach to optimize prevention of mother-to-child transmission of HIV using the consolidated framework for implementation research. Journal of Acquired Immune Deficiency Syndromes. 72:S108-S116.
Implementation study of a package of systems engineering tools rolled out in three African countries around PMTCT. Use of interviews and focus groups guided by CFIR constructs to explain implementation variation across the participating facilities.
- Website devoted to CFIR.
The Practical, Robust Implementation and Sustainability Model (PRISM) is a composite multi-level ecological model, put forward in 2008, to be applied at the preparation, establishment, and summative or maintenance phases of program implementation. Drawing from numerous pre-existing models (Diffusion of Innovations, PRECEDE-PROCEED, PARiHS, Quality Improvement, the Chronic Care Model, and RE-AIM), this prescriptive model assesses whether and how a blueprint of elements is in place to achieve successful implementation. There are four central domains (listed below), each with a series of elements designed to produce successful outcomes. In this model, successful implementation outcomes are measured using the RE-AIM constructs of adoption, effectiveness (reach and implementation), and maintenance. A visualization of the model accompanies the original publication.
PRISM Program Elements
There are four domains, with the following associated elements:
- Intervention: From the perspective of the organization (for leaders, managers, and staff) and the patient.
In the organization
- Strength of the evidence base
- Addresses barriers relating to frontline staff
- Coordination across departments and specialties
- Burden (complexity and cost)
- Usability and adaptability
- Trialability and reversibility
- Ability to observe results
For the patient
- Patient centeredness
- Provides patient choices
- Addresses patient barriers
- Seamlessness of transition between program elements
- Services and access
- Burden (complexity and cost)
- Feedback of results
- Recipients: From the perspective of the organization and the patient.
Relevant organizational characteristics
- Organizational health and culture
- Management support and communication
- Shared goals and cooperation
- Clinical leadership
- Systems and training
- Data and decision support
- Staffing and incentives
- Expectation of sustainability
Relevant patient characteristics
- Disease burden
- Competing demands
- Knowledge and beliefs
- External environment
- Payor satisfaction
- Regulatory environment
- Community resources
- Performance data
- Implementation and sustainability infrastructure
- Dedicated team
- Adopter training and support
- Relationship and communication with adopters (bridge researchers)
- Adaptable protocols and procedures
- Facilitation of sharing of best practices
- Plan for sustainability
Success is measured using the RE-AIM constructs, which should follow when the above elements are in place, or are appropriately acknowledged and addressed when needed.
The elements could be used to structure a quantitative survey or checklist to ensure that they are achieved, or they could be used to structure a qualitative interview topic guide to elicit substantial feedback about how they operate. The elements can be queried during the implementation planning stage, or during early or late implementation to prepare for sustainability.
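As a minimal sketch of the checklist approach described above, the snippet below marks each element as in place or not and surfaces the gaps for implementation planning. The elements shown are a small subset of PRISM's, and the True/False marks are hypothetical.

```python
# Sketch of a PRISM-style readiness checklist (illustrative subset of
# elements, hypothetical marks): each element is either in place (True)
# or a gap to address before or during implementation.

checklist = {
    "Strength of the evidence base": True,
    "Addresses patient barriers": True,
    "Management support and communication": False,
    "Dedicated team": True,
    "Plan for sustainability": False,
}

def readiness(checklist):
    """Fraction of elements in place, plus the outstanding gaps."""
    gaps = [item for item, in_place in checklist.items() if not in_place]
    score = (len(checklist) - len(gaps)) / len(checklist)
    return score, gaps

score, gaps = readiness(checklist)
print(f"{score:.0%} of elements in place; gaps: {gaps}")
```

The same element list could instead seed a qualitative topic guide, with each gap prompting follow-up questions about why the element is absent and how it could be established.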
A benefit of this model is that it naturally incorporates RE-AIM, so for researchers who want to use that outcomes framework, PRISM is straightforward to incorporate. Because it is eclectic, PRISM also combines a number of implementation traditions. The model also recognizes the importance of interpreting why results are or are not achieved from the perspectives of those interacting with interventions. Further, the model pursues input from multiple organizational actors (leaders, managers, and staff) and patients, with the understanding that not everyone will have the same experience or perspective within the system. A limitation of PRISM is the degree to which it pre-determines the most important elements and assumes that these elements will result in specific outcome types. It is possible that other elements will affect implementation under particular circumstances, and that implementation achievements may need to be measured using other outcomes.
Further, the model appears to be relational and phasic, with arrows used to show the conditions needed to achieve subsequent outcomes. However, the model itself does not guide the user to examine or measure these relationships, or relational dependencies. If this is of interest, the researcher will have to develop additional analytic techniques to study relationships or sequencing patterns between essential implementation elements.
Studies using PRISM in population health projects
- Leonard, C et al. Implementation and dissemination of a transition of care program for rural veterans: a controlled before and after study. 2017. Implementation Science. 12:123.
Links to additional resources
- Feldstein, AC and Glasgow, RE. 2008. A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Joint Commission Journal on Quality and Patient Safety. 34(4):228-243.
PRECEDE-PROCEED stands for Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation—Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development.
In the tradition of needing to diagnose a problem before treating it, PRECEDE-PROCEED is an eight- or nine-phase sequential ‘road map’ model for fostering large-scale community health interventions and other changes. It consists of a comprehensive needs assessment; the development of interventions based on theories of the problems, or of the changes needed, identified by the assessment; a period of intervention implementation; and an outcomes evaluation. The model was developed in two parts: PRECEDE between 1968 and 1974, followed by PROCEED in the 1980s, with its launch in 1991. It was revised and streamlined in 2005. The model uses an ecological framework and draws from health behavior theory.
The model is shaped as a horseshoe, composed of mirrored assessment and evaluation components (see Figure 1 of this article). The phases and their central constructs are as follows:
Phases 1-5: ‘Preceding’ Assessments of the Problem
- Phase 1 – Social assessment: Engage the community to establish big picture ‘quality of life’ issues related to the problem.
- Phases 2 & 3 (sometimes combined) – Epidemiological and behavioral assessment: Establish health-related patterns and behavioral issues attached to environmental (including social and physical), biological, and widespread behavioral causes/determinants.
- Phase 4 – Ecological and educational assessment: Assess perceptions of individuals related to making changes, particularly their knowledge, attitudes, beliefs, preferences, skills and degree of perceived self-efficacy. Using constructs found in health promotion theories such as the Health Belief Model, Social Cognitive Theory, and Stages of Change, this model guides researchers to assess predisposing factors (beliefs at outset), reinforcing factors (available rewards when new behavior performed; e.g. community standing), and enabling factors (helping or hindering ability to make behavioral changes; e.g., availability and accessibility of resources). These are factors that may be modifiable through the intervention.
- Phase 5 – Administrative and policy assessment: Assess environmental feasibility of developing programs given policy, programming, organizational context, and funding climate.
Phases 6-9: ‘Proceeding’ Implementation and Evaluation
- Phase 6 – Implementation: Initiation of intervention or programs
- Phase 7 – Process evaluation: Each component is structured by the assessment frameworks. Phase 7 measures changes in predisposing, reinforcing, and enabling factors.
- Phase 8 – Impact evaluation: Measures changes in broader health determinants, such as the environment.
- Phase 9 – Outcome evaluation: Evaluates changes in ‘Quality of Life’ variables identified as the most important to the community.
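To illustrate the Phase 7 logic, the sketch below computes the mean change in predisposing, reinforcing, and enabling factor scores between a baseline and a follow-up community survey. The items, scores, and the 1-5 scale are all invented for illustration.

```python
# Hypothetical Phase 7 process evaluation: mean change per factor type
# between baseline and follow-up surveys (illustrative 1-5 scale items).
from statistics import mean

baseline = {
    "predisposing": [2.1, 2.8, 3.0],  # e.g., belief/attitude items
    "reinforcing": [2.5, 2.2],        # e.g., perceived community support
    "enabling": [1.9, 2.4, 2.0],      # e.g., access-to-resources items
}
followup = {
    "predisposing": [3.2, 3.5, 3.6],
    "reinforcing": [3.0, 2.9],
    "enabling": [2.1, 2.6, 2.3],
}

def factor_change(baseline, followup):
    """Mean follow-up score minus mean baseline score, per factor type."""
    return {f: round(mean(followup[f]) - mean(baseline[f]), 2)
            for f in baseline}

print(factor_change(baseline, followup))
```

In this invented example, predisposing factors shift most and enabling factors least, the kind of pattern that would direct attention back to resource availability and accessibility in the Phase 4 assessment.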
Data sources to be incorporated into this approach are varied, and consist of (though are not limited to): community surveys, focus groups, interviews, questionnaires, surveillance data, administrative data, and broader literature reviews.
A strength of the model is that it is multilevel in its diagnosis of problems, and in what it aims to achieve through intervention programming. It is also community-based, beginning with community engagement, thereby ensuring that the problems and solutions are relevant for those affected by the issues. It is able to embed different types and levels of theory, and is presented in a contained and tidy formulation.
One limitation of PRECEDE-PROCEED is that it relies on varied kinds of data, which some programs may lack the access, time, or resources to pull together. Compared to some of the other models that focus on specific issues of implementation design, this model acknowledges that interventions need to fit within a context, but it does not focus on how to do this, or on how to embed programmatic details given individual, organizational, or broader system-level factors. In addition, the circularity of this model’s logic is helpful, but it may also limit researchers’ ability to capture less-planned-for implementation effects or outcomes if they do not mirror the assessment domains.
Studies using PRECEDE-PROCEED in population health projects
- Calano, B et al. Effectiveness of a community-based health programme on the blood pressure control, adherence and knowledge of adults with hypertension: A PRECEDE-PROCEED model approach. J Clinical Nursing. 2019. doi: 10.1111/jocn.14787.
- Darrow, W et al. 2004. Eliminating disparities in HIV disease: community mobilization to prevent HIV transmission among black and Hispanic young adults in Broward County, Florida. Ethnicity & Disease. 14.
- The Community Toolbox: description of Precede-Proceed.
- Gielen, Andrea, McDonald, E, Gary, Tiffany, Bone, L. 2008. Using the Precede-Proceed Model to Apply Health Behavior Theories in Health Behavior and Health Education, 4th Ed (Glanz, Rimer, and Viswanath, eds). Jossey-Bass. Chapter 18.
Background summary textbook article.
The Promoting Action on Research Implementation in Health Services (PARiHS) framework, first published in 1998 by researchers at the Royal College of Nursing Institute in the United Kingdom, is a conceptual framework that models the successful implementation of new practices, programs, and interventions as a function of evidence-based content, context of implementation, and facilitation of changes. PARiHS argues that the context in which evidence-based practices are introduced has as much to do with implementation as the quality of the practice. This framework is multi-level, recognizing that barriers to and opportunities for implementation reside with the clinic’s providers and patients, the organizational culture and ways of doing things, and its leadership, which combine to influence the uptake and use of new clinical practices.
PARiHS offers the formula: SI = ƒ(E, C, F)
SI = successful implementation; E = evidence; C = context; F = facilitation
PARiHS was established inferentially through the ‘knowledge and wisdom’ (Rycroft-Malone, 2004) its architects gained from working on numerous implementation studies, and generating critical elements needed for implementation. It was subsequently refined, clarified and tested, and most recently has undergone an instrument-building phase.
- Evidence: Assess the nature and strength of the evidence and its potential for implementation. There are four bases of evidence:
- Research: derived from systematic studies, but often acquired under controlled circumstances so adaptations are required
- Practitioner knowledge: tacit ‘know how’
- Community and intended population
- Local knowledge: conditions, ways of life or ways of doing things
- Context: Assess the quality of the setting into which the change will be introduced. Key elements include:
- Prevailing culture
- Approach to evaluation
- Relevance of the innovation to the organization
- Intervention fit with organizational structures and processes
- Adequate and appropriately allocated resources
- Multi-disciplinary focus
- Facilitation: Assess the type of facilitation needed to support the change. Key elements include:
- Facilitator – personal characteristics, role, and style
- Functions – ranging from helping or doing, to enabling and empowering
PARiHS assumes that implementation success is not only a function of the three domains above, but also that each domain should rate ‘high’ by achieving the elements laid out within it; high ratings are more conducive to successful implementation. A published summary table guides the development of high-low criteria that can be applied to each element.
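A minimal sketch of how such high-low ratings might be tallied is shown below. The specific elements listed, the individual ratings, and the 75% cut-off are all illustrative assumptions, not part of PARiHS itself.

```python
# Sketch of a PARiHS-style diagnostic (hypothetical elements, ratings,
# and cut-off): each element within Evidence, Context, and Facilitation
# is rated 'high' or 'low', and low-rated domains flag where support
# for implementation is most needed.

ratings = {
    "Evidence": {"Research": "high", "Practitioner knowledge": "high",
                 "Patient experience": "low", "Local knowledge": "high"},
    "Context": {"Prevailing culture": "low", "Leadership": "low",
                "Approach to evaluation": "high"},
    "Facilitation": {"Facilitator role": "high", "Facilitator style": "high"},
}

def domain_rating(elements, threshold=0.75):
    """A domain rates 'high' only if most of its elements do."""
    share_high = sum(v == "high" for v in elements.values()) / len(elements)
    return "high" if share_high >= threshold else "low"

profile = {domain: domain_rating(elems) for domain, elems in ratings.items()}
print(profile)
```

In this invented profile, Context rates low while Evidence and Facilitation rate high, suggesting the setting itself, rather than the evidence or the facilitators, would be the main target for preparatory work.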
Revised PARiHS Model: i-PARiHS (2015)
While there was wide-scale uptake of PARiHS in the implementation science literature, critiques (see Helfrich et al. 2010) led to a revised conceptual framework in 2015 called the integrated PARiHS framework (i-PARiHS).
The revised formula is: SI = Facn(I + R + C)
SI = successful implementation; Facn = facilitation; I = innovation; R = recipients (individual and collective); C = context
The major changes in i-PARiHS are: 1) a broader focus on innovation, and diverse sources of evidence to drive practice; 2) an explicit listing of individuals and collectives who are affected by implementation; 3) a wider view of context (micro, meso, macro levels), and 4) bolstering facilitation as a critical ingredient, upon which the other elements depend, to “activate implementation by assessing and responding to characteristics of the innovation and the recipients of the innovation within the contextual setting” (drawn from webinar series).
The revised model’s emphasis on facilitation also puts forward the competencies and skills a facilitator should possess, a training strategy for facilitators, and how facilitation should play out in practice. This is depicted in the i-PARiHS spiral (see National Cancer Institute ‘Fireside chat’ webinar slides).
Studies using PARiHS in population health projects
- Laycock, A et al. 2018. Application of the i-PARiHS framework for enhancing understanding of interactive dissemination to achieve wide-scale improvement in indigenous primary care. Health Research Policy and Systems. 16:117.
- Harvey, G and Kitson, A. 2016. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implementation Science. 11:33.
Helpful visual diagrams of the framework, and an implementation framework checklist.
- Helfrich, CD et al. 2010. A critical synthesis of literature on the promoting action on research implementation in health services (PARiHS) framework. Implementation Science. 5:82.
- Kitson, A., Harvey, G., & McCormack, B. (1998). Enabling the implementation of evidence based practice: a conceptual framework. Quality in Health Care, 7, 149-158.
- Kitson A, Harvey G. Facilitating an evidence-based innovation into practice: the novice facilitator’s role. In: Harvey G, Kitson A, editors. Implementing evidence-based practice in healthcare: a facilitation guide. Abingdon, Oxon: Routledge; 2015. p. 85–104.
- National Cancer Institute Implementation Science ‘Fireside Chat’ Webinar series: Use of theory in implementation research; pragmatic application and scientific advancement of the PARiHS framework. June 12, 2015, presented by Alison Kitson and Gillian Harvey.
Diffusion of Innovations (DOI) is an explanatory theory that gained prominence when Everett Rogers published Diffusion of Innovations in 1962, a study of how and why organizations adopt new practices. DOI is the most widely cited theory in IS and related literatures, with reports indicating it has been cited over 50,000 times across a wide range of topics.
Rogers defines diffusion as “the process through which an innovation, defined as an idea perceived as new, spreads via certain communication channels over time among the members of a social system” (cited in Beidas: 11). DOI describes typical patterns of adoption at the population level by individuals and across populations, as well as several different types of factors influencing adoption. Applications of this theory can be used to trial adoption strategies to see if they can improve or expedite the spread, or diffusion, of practices known to work.
Diffusion of Innovations Constructs
The following ideas guide this theory:
Stages of Adoption
The first construct of this model describes the stages individuals and organizations undergo in adopting new practices:
- Awareness: Gaining an awareness of an innovation
- Persuasion: Forming an opinion of an innovation
- Decision: Early ‘trial’ stage on the road to deciding whether or not to adopt the innovation
- Implementation: Putting the innovation to regular use (behavior change has occurred)
- Confirmation or sustainability: Reinforcing decisions already made
Rogers also theorized that adoption spreads across the population over time in an S-curve, based on the grouped characteristics of adopters:
- Innovators: champions, risk-takers, free from commitments and holding broad world views as first wave of adopters, 2.5% of the population
- Early Adopters: leaders and influencers, second wave of adoption at 13.5% of the population
- Early Majority: bridgers and spreaders, third wave of adoption at 34% of the population
- Late Majority: influenced by peer pressure, waiting for problems to be ironed out, fourth wave of adoption at 34% of the population
- Laggards: traditional, conservative, resistant and cautious, fifth wave of adoption at 16% of the population
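The adopter shares listed above imply a cumulative adoption curve: plotted over time as each wave completes, the running total traces the classic S-shape. A minimal sketch (wave timing omitted):

```python
# Rogers' adopter categories as population shares, and the cumulative
# adoption they imply as each wave completes.

adopter_shares = [
    ("Innovators", 0.025),
    ("Early Adopters", 0.135),
    ("Early Majority", 0.34),
    ("Late Majority", 0.34),
    ("Laggards", 0.16),
]

def cumulative_adoption(shares):
    """Running total of adoption after each adopter wave."""
    total, out = 0.0, []
    for name, share in shares:
        total += share
        out.append((name, round(total, 3)))
    return out

for name, cum in cumulative_adoption(adopter_shares):
    print(f"After {name}: {cum:.1%} of the population has adopted")
```

Note that adoption reaches 50% only once the Early Majority completes; the slow early growth, steep middle, and flattening tail are what give the curve its S-shape.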
In addition to adopter processes and characteristic types, Diffusion of Innovations also identifies several additional factors influencing adoption:
- Costs: perceived cost of adopting the practice
- Effectiveness: extent to which the new practice is better than what came before
- Simplicity: degree to which the intervention is easy to understand
- Compatibility: ‘fit’ with the audience; degree of intervention adaptability
- Observability: extent to which an outcome can be measured as working
- Trialability: extent to which the intervention can be tried before committing
Environment: Broader structures of the system, networks and opinions, perceptions of adopters, and pressures to adopt
External Change Agents: Individuals who are in a position to influence those adopting the innovation by showing its benefits, or by revealing what is undesirable, which may slow adoption processes
Facilitation: Favorable communication and other tactics between change agents and adopters, which may speed adoption
The DOI model usefully lays out several levels of constructs to apply to research studies, and attends to wider environmental and facilitation dynamics and their role in spread. However, there are some concerns related to DOI. The first is a pro-innovation bias, which may demean the non-adopter (the ‘laggard’) rather than seek to understand why he or she chooses not to adopt a new practice, which may be for a plausible and appropriate reason. DOI is also limited by its minimal focus on non- or partial adoption, or on different patterns of adoption across the population, each of which may be more accurate than an S-curve. Finally, as other theories show, within the same organization there may be both adopters and non-adopters; recognizing this diversity and its implications may be as important as characterizing organizations using the more straightforward typology in DOI. That said, DOI can serve as a helpful starting point, from which the model can be adapted to the social system.
Studies using Diffusion of Innovation in population health projects
- Bertrand, Jane. Diffusion of Innovations and HIV/AIDS. Journal of Health Communication. 2004. 9:113-121. doi: 10.1080/10810730490271575.
Article reviewing several early HIV prevention campaigns in the US and internationally, using DOI to underpin the intervention strategies.
- Owen, N, Glanz, K, Sallis, J, Kelder, S. 2006. Evidence-based approaches to dissemination and diffusion of physical activity interventions. American J of Prev Med.
Review of several physical activity-based health-related implementation science interventions using diffusion of innovation constructs.