Most systematic reviews concentrate on pooling effect estimates from multiple trials conducted in different contexts, as though there were one underlying effect that can be uncovered by pooling. They often fail to examine mechanisms and how these might interact with context to generate different outcomes in different settings and populations. Realist reviews do focus on questions of what works for whom under what conditions, but they do not use rigorous methods to search for, appraise the quality of, and synthesise evidence to answer these questions. We show how systematic reviews can explore more nuanced questions informed by realism while retaining rigour. Using the example of a systematic review of school-based interventions to prevent dating and other gender-based violence, we first examine how systematic reviews can define context–mechanism–outcome configurations. This can occur through synthesis of intervention descriptions, theories of change and process evaluations.
Realist evaluators argue that evaluations need to ask not just what works but also what works for whom under what conditions. They argue that interventions need to be evaluated in terms of the mechanisms they trigger and how these interact with context to generate different outcomes in different settings or populations. Hypotheses should be worded as context–mechanism–outcome configurations (CMOCs). Many realist evaluators argue that randomised trials are not a proper scientific design, do not encompass sufficient variation in contexts to test CMOCs and are inappropriately positivist in orientation. They argue that it is better to test CMOCs using observational designs which do not use randomisation. We welcome the focus on CMOCs but disagree with the view that trials cannot be used for realist evaluation. Trials are an appropriate scientific design when it is impossible for experimenters to control all the factors that influence the result of an experiment. Trials can include a sufficient variety of contexts to test CMOCs. Trials need not embody a positivist approach to the science of complex health interventions if they are oriented towards testing hypotheses, draw on theory which engages with deeper mechanisms of causation and use distinctly social science approaches such as qualitative research.
Once context–mechanism–outcome configurations (CMOCs) have been refined through qualitative research, they can be tested using quantitative data. A variety of different analyses can be used to assess the validity of CMOCs. Analyses of overall effects will not assess CMOCs but remain useful in determining whether an intervention works overall. Mediation analyses assess whether any intervention effect on an outcome is explained by intervention effects on intermediate outcomes, and so can shed light on mechanisms. Moderation analyses assess how intervention effects vary between subgroups defined in terms of baseline context (settings or populations), and so shed light on contextual differences. Moderated mediation analyses assess whether mediation is apparent in some contexts but not others, and so can shed light on which mechanisms appear to generate outcomes in which contexts. Qualitative comparative analyses can examine whether more complex combinations of markers of context and mechanism co-occur with markers of outcome. Together, this set of analyses can provide nuanced and rigorous information on which CMOCs most usefully explain how intervention mechanisms interact with context to generate outcomes.
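As a rough sketch of how moderation and mediation analyses might be run on trial data (the chapter does not prescribe particular software), the following Python example fits ordinary least squares regressions to simulated data; the variable names (treatment, context, mediator, outcome) and the data-generating assumptions are hypothetical.

# Hypothetical sketch: moderation and mediation analyses on simulated trial data.
# Variable names and simulated values are illustrative only, not from the chapter.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),   # randomised allocation (0 = control, 1 = intervention)
    "context": rng.integers(0, 2, n),     # hypothetical baseline contextual marker
})
df["mediator"] = 0.5 * df["treatment"] + rng.normal(size=n)   # intermediate outcome
df["outcome"] = (0.3 * df["mediator"]
                 + 0.2 * df["treatment"] * df["context"]
                 + rng.normal(size=n))

# Moderation: does the intervention effect vary with baseline context?
# The treatment:context interaction term captures this.
moderation = smf.ols("outcome ~ treatment * context", data=df).fit()
print(moderation.params)

# Mediation (simple product-of-coefficients sketch): path a (treatment -> mediator)
# multiplied by path b (mediator -> outcome, adjusting for treatment).
path_a = smf.ols("mediator ~ treatment", data=df).fit().params["treatment"]
path_b = smf.ols("outcome ~ treatment + mediator", data=df).fit().params["mediator"]
print("Estimated indirect (mediated) effect:", path_a * path_b)

A moderated mediation analysis would extend this sketch by estimating the indirect effect separately within the contextual subgroups and comparing them.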
It is important to limit statistical testing of context–mechanism–outcome configurations (CMOCs) to those which are most plausible. This is because testing too many hypotheses will lead to some false positive conclusions. Qualitative research conducted within process evaluations is a useful way to inform refinement of CMOCs before they are tested using quantitative data. Process evaluations aim to examine intervention implementation and the mechanisms that arise from this. They involve a mixture of quantitative (for example, logbooks completed by intervention providers) and qualitative (for example, interviews or focus groups with recipients) research. Qualitative research can be useful in assessing and refining CMOCs because intervention providers and recipients will have insights into how intervention mechanisms might interact with context to generate outcomes. These insights might be explored directly (for example, by asking participants how they think the intervention works) or indirectly (for example, by asking participants about their experiences of an intervention, and the conditions and consequences of these experiences). Sampling for such qualitative research should ensure that a diversity of participant accounts is explored. Analyses of these accounts can draw on grounded theory approaches, which aim to build or refine theory based on qualitative data.
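To illustrate the false-positive concern, a minimal Python sketch (with invented p-values, not results from any trial) shows how the chance of a spurious finding grows with the number of CMOCs tested, and how a familywise correction such as Holm's might be applied to a pre-specified set of tests.

# Hypothetical sketch: why limiting the number of CMOC tests matters.
from statsmodels.stats.multitest import multipletests

# The chance of at least one false positive across k independent tests at alpha = 0.05
# grows quickly, which is why only the most plausible CMOCs should be tested.
for k in (1, 5, 20):
    print(f"{k} tests: P(at least one false positive) = {1 - 0.95 ** k:.2f}")

# Invented p-values from tests of ten candidate CMOCs, for illustration only.
p_values = [0.004, 0.03, 0.04, 0.20, 0.35, 0.41, 0.55, 0.62, 0.71, 0.90]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for p_raw, p_adj, rej in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p_raw:.3f}, Holm-adjusted p = {p_adj:.3f}, reject: {rej}")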
This chapter reflects on how evidence from realist trials and systematic reviews might be of value, not only in drawing conclusions about specific interventions and their theories of change but also in testing and refining the middle range theories which inform these and other interventions. While evaluation evidence should be of most immediate use in informing decisions about the implementation of the specific interventions being evaluated, a broader and more enduring use for evaluation could be in suggesting refinements to middle range theory. Such refinements might then be used to inform and influence the next generation of complex health interventions. To be useful in assessing the validity of middle range theory, evaluations will need to assess interventions informed by a limited number of middle range theories comprising a limited number of well-defined constructs. There may be value in conducting proof-of-principle studies separately from more pragmatic evaluations in order to test and refine middle range theory.
Theories of change propose how intervention resources and activities might lead to the generation of outcomes. They are sometimes presented diagrammatically as logic models. Realist evaluators and others have suggested that interventions should be theorised in terms of how intervention mechanisms interact with context to generate outcomes. Our own trial of the Learning Together whole-school intervention to prevent bullying set out to define, refine and test such theories in the form of context–mechanism–outcome configurations (CMOCs). We drew on several sources to define our starting CMOCs. These included existing middle range theory: scientific theory about general mechanisms (not necessarily concerning an intervention) that generate outcomes, which should be analytically general enough to apply to a range of settings, populations and/or outcomes, but specific enough to be useful in a given application. We also drew on previous research and public consultation to inform our CMOCs.
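One practical way to keep starting CMOCs explicit and testable is to record each configuration in a structured form. The Python sketch below is a hypothetical illustration of such a record; the fields and the example content are assumptions for illustration, not taken from the Learning Together trial documentation.

# Hypothetical sketch: recording starting CMOCs in a structured, testable form.
from dataclasses import dataclass

@dataclass
class CMOC:
    context: str     # setting or population feature the mechanism depends on
    mechanism: str   # process the intervention's resources are expected to trigger
    outcome: str     # change the mechanism is expected to generate
    source: str      # origin of the hypothesis (middle range theory, prior research, consultation)

# Illustrative example only; wording is not quoted from the trial.
example = CMOC(
    context="schools with weak student-staff relationships at baseline",
    mechanism="whole-school activities build students' sense of belonging",
    outcome="reduced involvement in bullying",
    source="existing middle range theory and public consultation",
)
print(example)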