
22 - Evaluation of Behavior Change Interventions

from Part II - Methods and Processes of Behavior Change: Intervention Development, Application, and Translation

Published online by Cambridge University Press:  04 July 2020

Martin S. Hagger, University of California, Merced
Linda D. Cameron, University of California, Merced
Kyra Hamilton, Griffith University
Nelli Hankonen, University of Helsinki
Taru Lintunen, University of Jyväskylä

Summary

Rigorous evaluation of interventions is vital to advancing the science of behavior change and identifying effective interventions. Although randomized controlled trials (RCTs) are often considered the "gold standard," other designs are also useful. Considerations when choosing an evaluation design include the research questions, the stage of evaluation, and different evaluation perspectives. Approaches to exploring the utility of an intervention include a focus on (1) efficacy; (2) "real-world" effectiveness; (3) how an intervention works to produce change; or (4) how the intervention interacts with its context. Many evaluation designs are available: experimental, quasi-experimental, and nonexperimental. Each has strengths and limitations, and the choice of design should be driven by the research question. Choosing relevant outcomes is an important step in planning an evaluation. A typical approach is to identify one primary outcome and a narrow range of secondary outcomes; however, focusing on a single primary outcome means other important changes may be missed. A well-developed program theory helps identify relevant outcomes. High-quality evaluation requires (1) involvement of relevant stakeholders; (2) evaluating and updating the program theory; (3) consideration of the wider context; (4) addressing implementation issues; and (5) appropriate economic input. Addressing these points can increase the quality, usefulness, and impact of behavior change interventions.
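The summary's point about identifying one primary outcome can be made concrete with a small simulation. The sketch below is not from the chapter; it is a minimal, hypothetical Python illustration of estimating a treatment effect on a single continuous primary outcome in a two-arm parallel RCT, with all parameter values (sample size, effect size, standard deviation) chosen purely for illustration.

```python
import random
import statistics

def simulate_rct(n_per_arm=200, control_mean=0.0, effect=0.5, sd=1.0, seed=42):
    """Simulate a two-arm RCT with one continuous primary outcome.

    Returns the estimated treatment effect (difference in arm means)
    and its approximate standard error. All defaults are illustrative.
    """
    rng = random.Random(seed)
    control = [rng.gauss(control_mean, sd) for _ in range(n_per_arm)]
    treatment = [rng.gauss(control_mean + effect, sd) for _ in range(n_per_arm)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    # Standard error of the difference in means (independent samples)
    se = ((statistics.variance(control) + statistics.variance(treatment))
          / n_per_arm) ** 0.5
    return diff, se

diff, se = simulate_rct()
print(f"estimated effect: {diff:.2f} (SE {se:.2f})")
```

Simulations like this are often used at the planning stage to check whether a design's sample size gives an acceptably small standard error for the chosen primary outcome, before committing to a full-scale evaluation.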

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2020


