Enhancing Reporting of Behavior Change Intervention Evaluations

University of Exeter Medical School, University of Exeter (Abraham), Center for Health, Intervention, and Prevention, University of Connecticut (Johnson), Health Psychology Group, Institute of Applied Health Sciences, University of Aberdeen (de Bruin), University of Social Sciences and Humanities, Warsaw, Poland, and Trauma, Health, & Hazards Center, University of Colorado at Colorado Springs (Luszczynska)
This article critiques evaluations of behavior change interventions (BCIs) for the prevention of HIV and "advances recommendations made by an international group of scholars constituting the Workgroup for Intervention Development and Evaluation Research (WIDER), which has developed brief guidance to journal editors to improve the reporting of evaluations of behavior change interventions." The article appears in a JAIDS: Journal of Acquired Immune Deficiency Syndromes supplement addressed to clinicians and public health scientists in the field of HIV prevention and treatment who might value information on health communication. (Footnotes removed by the editor.)
The article identifies four practices that undermine the reporting of evaluations of behavior change interventions:
- Lack of details on design and implementation of the intervention;
- Failure to consider content and support to control groups;
- Paucity of detailed process evaluations showing mechanisms by which interventions generate their effects; and
- Lack of replication in other contexts.
Each of these is considered below:
1) Lack of details on design and implementation of the intervention - Inaccessibility of detailed behavior change manuals and poor descriptions of the content of BCIs "result in difficulty replicating and explaining heterogeneity in trial results and in failures when translating trial results to community sites." There is a need for meta-analyses to work with the treatment manuals of the Centers for Disease Control and Prevention (CDC)'s Diffusion of Evidence-Based Interventions Project to identify "content of interventions (and control groups) that generate differences in measures of efficacy and effect size across efficacy trials."
"Guidelines on what should be included in reports of interventions (eg, Consolidated Standards of Reporting Trials [CONSORT] and Transparent Reporting of Evaluations with Nonrandomized Designs [TREND]) have shaped editorial policy so as to enhance and standardize the details made available in recently published trials. Yet, although CONSORT guidance calls for 'precise details of the interventions intended for each group and how and when they were actually administered,' editorial focus is often limited to ensuring detailed descriptions of evaluation methods (eg, trial procedures) rather than of intervention implementation."
2) Failure to consider content and support to control groups - A clear understanding of the type of control group (including size), any active content to which the control was exposed, and of the intervention to which it is compared is necessary for "interpretation, comparison, and generalizability of trial effects."
Often the statement or assumption is that the control group gets "usual care", leaving a need to define care in the reporting. "Making this change in trial reporting practices may also challenge intervention designers to specify more clearly how proposed experimental interventions will improve upon the best available usual care already delivered routinely."
3) Paucity of detailed process evaluations showing mechanisms by which interventions generate their effects - As stated here, there is a need for "closer integration of basic and applied behavioral science.... BCI designers should apply research into change mechanisms to intervention design and should use process evaluations measuring change mechanisms to enable testing of psychological theory in practice."
The six stages of intervention mapping (IM) used to construct a logic model for the BCI design process are: 1) needs assessment; 2) primary and secondary intervention objectives; 3) identification of underlying mechanisms that maintain unwanted behavior patterns and of those that may generate specified changes; 4) identification of evidence-based behavior change techniques and practical ways of delivering them; 5) implementation planning; and 6) evaluation measures.
If BCI designs specify targeted change processes (IM stage 3), their evaluations (IM stage 6) can include measures of the mechanisms targeted to change behavior. "For example, if an intervention is intended to change motivation by changing participants' beliefs about what others are doing or about what their peers approve (eg, 'Do other people of your age and gender use condoms and approve of their use?'), then it is critical to know if such beliefs changed relative to a no-intervention control group." This knowledge is especially informative when an intervention fails, because it identifies where the failure lies: in design and implementation, or in the theory of change.
"In summary, precise specification of change techniques designed to alter identified regulatory mechanisms in logic models of intervention and active control content would allow greater precision in identifying what works, for whom, and by what change processes....Of course, assessment of mechanism is not the only purpose of process evaluations. Another key purpose is identifying the contextual factors that may influence intervention success or failure. Measuring differences in contexts and designing trials that systematically vary contextual factors is critical to developing an understanding of the generalizability of effective BCIs."
4) Lack of replication in other contexts - As stated here, "Replication is critical to the advancement of science, including behavioral science..." and depends upon "full disclosure of BCI design, including logic models, change techniques, and delivery methods employed as well as full details of evaluation processes." For work to be replicable, transferable, and comparable, "a clear protocol or manual to guide implementation is …critical...in intervention reporting..." as well as in implementation.
The conclusions include a link to WIDER Recommendations (PDF format) and the four issues they address:
- "Editors of scientific journals should ensure that BCI evaluations comply fully with the extended CONSORT statements...
- This goal should be supported by clarification of the following: (1) the change processes considered necessary to prompt a change in the specified behavior(s), (2) how the intervention design was informed by theoretical considerations or models of causal or regulatory processes, and (3) what mechanism-based change techniques were included....
- [D]etailed information about materials and implementation... must be included in protocols or manuals describing intervention implementation….
- "WIDER recommended that such manuals also include the details of any services or care provided to control groups...."
The authors conclude that enhancing access to BCI evaluation research, facilitating accurate replication, and improving transferability across contexts and implementation settings can increase the impact of such research on individual and public health.
JAIDS: Journal of Acquired Immune Deficiency Syndromes, August 15, 2014 - Volume 66 - pp. S237-S240, accessed July 22, 2014.