Book contents
- Frontmatter
- Contents
- List of figures
- List of examples
- Acknowledgements
- Preface
- Glossary of selected evaluation terms
- 1 Introduction
- 2 Compilation: setting the right foundations
- 3 Composition: designing for needs
- 4 Conducting process evaluation
- 5 Conducting economic evaluation
- 6 Conducting impact evaluation
- 7 Analysis, reporting and communications
- 8 Emerging challenges for evaluation and evaluators
- References
- Annex A The ROTUR framework for managing evaluation expectations
- Annex B Ready reckoner guide to experimentation choices in impact evaluation
- Index
- Social Research Association Shorts
Preface
Published online by Cambridge University Press: 05 April 2022
Summary
Evaluation methods and the plethora of theories surrounding them can be mystifying for the uninitiated. It does not have to be that way.
This book aims to help students, researchers, professionals, practitioners and anyone else coming new or inexperienced to evaluation, whether as specifiers, designers or users, to cut through the jargon. Its approach is practical, not theoretical, and its starting point is the dilemma set out a quarter of a century ago by Michael Scriven, the British-born Australian polymath and philosopher, who said:
Practical life cannot proceed without evaluation, nor can intellectual life, nor can moral life, and they are not built on sand. The real question is how to do evaluation well, not to avoid it. (Scriven, 1991, p 8)
Scriven, a past president of the American Evaluation Association, drew his 'how, not whether' challenge from four decades of practical experience and an already deep legacy of evaluation developments in the health, life and physical sciences. Social science came relatively late to this challenge, and has been slow to resolve its discomfort with how well the evaluation legacy of other disciplines fits the social world. Much of the methodological confusion facing new evaluators today, and many of the theories that at first seem contradictory, stem from this.
This is not to say that the evaluation toolbox remains empty; many social scientists taking their first cautious steps into the design, delivery or use of evaluation might think it is overfull. What is needed is not a more compact toolbox but a practical, joined-up way of thinking about how the available tools best fit different needs and circumstances. For two decades or more, social scientists in North America and Northern Europe in particular have struggled to define this, often from the standpoint of their own disciplines. The extensive literature that has resulted is scholarly, often thought-provoking and sometimes influential, but outside the circle of experienced evaluators and academics it has too often added to the confusion rather than diminished it.
As if the confusion were not enough, the demands on evaluators have multiplied. Social scientists working in policy and practice find that 'doing evaluation well' brings challenges that could not have been anticipated 25 years ago. Decision makers' timeframes for evidence collection and analysis are shrinking; budgets are diminishing; more is expected for less, and in less time.
- Type: Chapter
- Information: Demystifying Evaluation: Practical Approaches for Researchers and Users, pp. ix–x
- Publisher: Bristol University Press
- Print publication year: 2017