thirteen - Community-led regeneration: learning loops or reinvented wheels?
Published online by Cambridge University Press: 20 January 2022
Introduction
Area-based initiatives are perhaps one of the most monitored and evaluated policy arenas today. National and local evaluations have been commissioned of Health Action Zones, Single Regeneration Budget (SRB) funding, Neighbourhood Management Pathfinders, Sure Start, New Deal for Communities (NDC) and the Neighbourhood Renewal Fund, among others. While each differs in geographical area, objectives and funding rules, evaluators confront the same problems. There is substantial evidence and learning to draw upon from past evaluations, both locally and nationally, yet there is little evidence that these lessons have been learnt. Indeed, many practitioners appear to set out with a clean sheet, a new and unique political context, and priorities or agendas of their own. Drawing upon research on two EU-funded URBAN programmes, both the subject of national and local evaluations, this chapter seeks to understand the problems and possibilities of learning from past evaluations.
The role of evaluation
Evaluation makes little sense unless it is understood as part of a learning process; learning is what distinguishes it from audit, performance management and reporting. Indeed, for some, evaluations should be explicitly framed to ensure their use by policy makers and other stakeholders, including practitioners (Patton, 1997). Without wishing to engage with the methodological implications of such an approach, it is commonly assumed that evaluations of public services will, in some way, contribute to a body of knowledge and understanding, leading to improved policy making and practice. Weiss (1998, pp 25-8) identifies a number of ways in which evaluation might contribute to decision-making:
• midcourse corrections to programmes;
• continuing, expanding, cutting or ending a programme;
• testing new ideas; and
• selecting the best of several alternatives.
Evaluations undertaken with such objectives might contribute to organisational learning by:
• providing a record of a programme;
• giving feedback to practitioners;
• highlighting programme goals;
• providing a measure of accountability; and
• contributing to understanding of social interventions.
As such, evaluation plays an important role in developing organisations and improving future interventions. However, experience suggests that there are many other roles that evaluation might play. Weiss (1998, p 22) suggests that evaluation might also act as a form of subterfuge, for example:
• postponing difficult decisions pending an evaluation;
• ducking responsibility by relying on the ‘independent’ findings of a study;
• providing window dressing for decisions that have already been made; and
• serving as a public relations exercise to draw attention to positive aspects.
The Politics of Evaluation: Participation and Policy Implementation, pp 205-222. Publisher: Bristol University Press. Print publication year: 2005.