There is increasing emphasis on the need to evaluate what we do and to ensure we do it in the most effective or cost-effective ways. Evidence-based healthcare has focused principally on clinical (especially medical) activities. But the way healthcare is organised, financed and managed may well have as much effect on costs and the quantity and distribution of health outcomes as the treatments used. It is unfortunate, therefore, that the national research effort, including the NHS research and development programme, has largely ignored the wider context of health services compared with its large investment in health technology assessment and clinical research.
This is often reflected in books summarising the results of research and surveying evaluation methods. It is refreshing, therefore, to come across this book, written by the professor of health policy and management at the Nordic School of Public Health, which takes a much broader look at evaluation. John Øvretveit not only looks at the more traditional evaluation questions, such as the effectiveness or cost-effectiveness of a treatment, but also considers how to evaluate programmes, projects, policy, and the impact of system reorganisation.
The book identifies six main types of design: descriptive, audit, before-after, comparative experimentalist, randomised controlled experiment, and studies of interventions to a health organisation which look at their impact on either providers or patients; and four evaluation perspectives: experimental, economic, developmental and managerial. These are then discussed with reference to case studies. Some of the distinctions drawn are rather arbitrary, but the typology gives a useful sense of the breadth and variety of approaches available. It should act as a counterweight to managers who, for example, avoid reflecting critically on their actions on the grounds that experimental methods are not appropriate.
An important feature of the book is that it not only surveys the range of methods available but gives sensible, structured advice on organising an evaluation. The text works carefully through the phases of initiation, reviewing existing knowledge, finalising design, data collection, data analysis and reporting, judging value and deciding action, and evaluator self-review. A chapter on common practical and political problems is particularly useful: it highlights the need to understand the policy and political context of the area being researched, the uncertainty that unacknowledged agendas can cause, and the need to identify 'fuzzy' boundaries, 'wobbly' interventions and 'ghostly' goals.
The New NHS white paper has moved quality of health services to the front of the policy debate, and this book has a useful chapter dealing with the difficulties of evaluating quality.
Evaluations should not be carried out without an idea of how the results could inform practice or policy. Of course, the results of any one study will be only one input into the decision-making process and may not be a sufficient basis for change. The final chapter considers how more use can be made of evaluations.
I recommend this as one of the best recent introductory texts on health service evaluation. It is clearly written, uncluttered by methodological details that can be found in other texts, and broad enough to be relevant to a wide range of people. Useful appendices provide learning exercises and a critical appraisal framework for analysing an evaluation and assessing evidence.
TREVOR SHELDON
Director of the NHS Centre for Reviews and Dissemination, University of York.