Published: 04/04/2002, Volume 112, No. 5799, Pages 26-29
Just as the Commission for Health Improvement completes its second full year of activity, it is facing a major transformation. At its launch CHI emphatically repudiated the notion that it was an inspectorate, anxiously distinguished itself from bodies such as Ofsted (the Office for Standards in Education) and stressed its developmental role. Now, however, it is official: CHI is an inspectorate.
In the NHS Reform Bill, currently going through Parliament, CHI is specifically charged with inspecting the service - as distinct from reviewing the clinical governance arrangements of trusts - and making recommendations about the measures needed to deal with poor-quality providers. Additionally, CHI's role will expand. It will include an office for information on healthcare performance, become responsible for the 'star' system of assessing trusts, and produce an annual report on the state of the NHS. Last, ministers want CHI to become the overseer of the multitude of inspectorates in the NHS.
It is a formidable agenda for a body that is still at the stage of institutional adolescence. Can CHI cope? Or are ministers pressing it into a premature institutional adulthood?
Two years of self-invention

Much of CHI's energies in its first years have been absorbed by the act of self-invention. CHI is the fastest growing institution in the NHS. Its budget has more than doubled, from £12m in 2000-01 to £25m in 2002-03, and is set to increase further. Staffing numbers have also shot up over the same two years, from 180 to more than 300. The current core staff of 300-plus includes 50 review managers who orchestrate the inspection visits to trusts.
Staff have a variety of career backgrounds and qualifications: managerial, medical and nursing. Many of them have worked in the NHS or the Department of Health, some have come from the Audit Commission, a few have academic backgrounds. CHI also has a pool of 500 part-time reviewers from whom inspection teams are drawn. Most are seconded from the NHS; some are lay people. The proportions reflect the composition of the reviewing teams. Thus a typical team will include a doctor, a manager, a nurse, an allied health professional and a couple of lay members.
What kind of methodology?
From the start, CHI has had to reconcile conflicting pressures and goals. An impatient DoH has set tight targets for the production of clinical governance reviews: 500 such reviews are to be completed by 2004 (in addition to special investigations and studies of the implementation of national service frameworks).
Further, ministers cited Ofsted reports as the kind of outputs they wanted. CHI, on the other hand, needed time to develop and test its reviewing methodology. And, above all, CHI wanted to be accepted by the NHS and seen as helpful to those being reviewed rather than judgemental in its style. Hence its repudiation of the inspectorate label, and its indignant assertion that it was not an Ofsted for the NHS. Hence, too, its emphasis on stressing the positive in its reports, and citing examples of good practice worthy of imitation in the rest of the NHS.
Compounding these pressures was a fundamental tension in CHI's remit. The reviews, according to the standard formula of all CHI reports, are designed 'to provide independent and systematic scrutiny of the clinical governance arrangements in each trust'. Clinical governance, in turn, is defined to mean 'the organisation's systems and processes for monitoring and improving services'. The critical components of clinical governance are summed up in the 'seven pillars of wisdom' - a set of rather vacuous categories waiting to be filled - around which all review reports are organised (see box). But how far should CHI go in testing whether the systems and processes are being implemented? Is its function to examine whether the appropriate mechanisms are in place and working, or should it be reviewing the quality of care being delivered?
How far, indeed, could it make any judgements about the effectiveness of clinical governance without looking at the end product - the services being delivered to patients? More fundamentally, are the seven pillars - a DoH invention - appropriate for judging a trust's performance?
CHI's methodology is something of a compromise between looking at mechanisms and testing whether they deliver in practice.
The first step in any review is to collect and analyse data about the trust's overall activities and performance from a variety of sources.(1)
Informed by this analysis, CHI reviews then look in depth at the activities of three clinical areas. CHI also seeks the views of staff, patients and external individuals and agencies with an interest in the trust's delivery of healthcare, such as GPs and community health councils. So, in effect, CHI seeks to test the effectiveness of clinical governance by digging bore holes into the organisation.
In designing its methodology CHI also had to address a more general regulatory dilemma.
Should it specify the standards and criteria to be used by its reviewers? Or should it leave them discretion on how to interpret the evidence within the broad framework of the seven pillars and the sub-categories within them? Each regulatory strategy has its own drawbacks.(2)
Specifying standards and criteria risks encouraging box-ticking pedantry. Allowing discretion risks creating confusion and inconsistency. Unlike many other regulatory agencies - not only Ofsted but also CHI's Scottish equivalent, the Clinical Standards Board for Scotland - the commission opted for the second risk.
But while CHI shared many dilemmas with other regulatory agencies, there was one problem which it faced in a peculiarly aggravated form: the heterogeneity of the NHS. Given the multiplicity of clinical teams, departments and services within any trust - and given that their quality may range from the outstandingly good to the outstandingly bad - could any team of reviewers come to a general conclusion about the organisation? If a trust failed to collect and analyse data about its own clinical performance (as many did), then indeed a team could confidently conclude that the machinery of clinical governance was inadequate.
But even if the machinery was adequate, it did not follow that it was operating effectively across all services, let alone that the quality of those services was satisfactory.
But how, in any case, was quality to be judged? Most data about technical quality, as measured by outcome indicators, raises as many questions as it answers: CHI has tended to use such data as a challenge to trust managers - rightly seeing managers' ability to answer the questions raised as a critical test of their competence - rather than as a direct measurement of performance. But there is, of course, a further dimension, expressed in CHI's reiterated emphasis on putting the patient's experience at the heart of its work. Easier said than done, given that different patients have different experiences at different times even within the same trust. In the event, CHI adopted a variety of strategies. Meetings are arranged, and views invited, from patients and carers as part of each review; further, 200 diaries are distributed to discharged patients. With predictably low response rates, however, the patient experience has proved elusive: the methods have provided fragmentary clues rather than a complete picture.
Compounding the other methodological challenges, there was another special problem for CHI. This was the variation in the size and characteristics of the trusts being inspected. CHI has used the same timetable and methodology for all trusts, whether they have 1,260 beds distributed across four sites (Oxford Radcliffe) or 285 beds on one site (Liverpool Women's).
There is no evidence of any systematic relationship between the nature of the trust and the scope and scale of the inquiry: for example, fewer interviews were held with staff in the former, large trust than in the latter - 61 as against 77.
Reviewing the reviews
In its second year CHI's review assembly line has gone into full production. Over 100 review reports have been published. This is in addition to a handful of special investigations and a review of the implementation of the national service framework for cancer care. In what follows we concentrate on the main focus of CHI's activity: the review reports. Specifically, we draw our illustrative examples from the 25 reports published in the last two months of 2001 and the first month of 2002.
In concentrating on the later reports, our expectation was that by this stage in the development of the reviewing process, the commission would have had enough time to iron out problems inherent in launching an innovative project. However, this proved to be an over-optimistic assumption. Although all reports use the same format and headings, they turn out to be extremely variable. For example, there is no consistency in the basic data provided: no standard profile, as it were. In some reports, data about performance is broken down by specialty or even by individual surgeon. In others, it is not. If this represents a failure by the trust to provide relevant data - rather than an omission by CHI's analysts - then this surely ought to be explicitly noted in the reports. The inability of trusts to produce such data might, after all, be thought to be a highly significant indicator.
The reports also underline the difficulty for reviewers in defining and capturing the patient experience. Once again the reports are only consistent in their inconsistency. Different issues are flagged in different reports, and it is difficult to be sure whether this reflects differences in the environment being studied or differences in the review teams. The latter seems more plausible. For example, car parking is flagged as a problem in some reports (Burton), while it is ignored in others (the Whittington - where parking is notoriously difficult). Again, although most reports are peppered with quotes from patients about the quality of food, staff attitudes and so on, it is difficult to judge how representative (or otherwise) these comments are.
A possible strategy for assessing the patient experience might be to use proxy indicators for the quality of service offered - for example, the incidence of hospital infections or bed sores. Only exceptionally, however, are these reported in reviews (Salford Royal). Even these exceptional reviews rarely put the figures into a wider, comparative context. Again, if trusts are unable to supply such data, this would surely be a highly significant finding in its own right.
The failure to put data into context, and to critically analyse it, cuts deeper. Consider staffing: staff shortages are a recurring theme in reports. These frequently note complaints from staff or patients about shortages, creating stress for those in post.
Sometimes the reviews themselves voice concern. For example, the Royal United Hospital, Bath, review reports that 'nurse staffing levels in several clinical areas were a concern to CHI'. But the reviews lack precision and authority in their comments. Nor do they provide any analysis of how staff are actually deployed in the trust: that is, the efficiency with which staff are used. Staff sickness absence and turnover rates are frequently - but not uniformly - given. However, only rarely are the figures compared with those of comparable trusts. Nor do the reports agree on whether performance should be compared with other trusts at one point in time or should be examined historically within the same trust (as CHI's emphasis on continuous quality improvement might suggest).
Similar points could be made about the use of other data.
Assessing research performance clearly presents difficulties for reviewers. Rather desperately, the reports tend to cite the number of research papers published and make ritual recommendations that users should be included in research management - not a self-evident proposition. More generally, patient and public involvement is taken as axiomatically desirable, and ritually invoked. As in the case of research, its link with quality of care is assumed (very dubiously) to be self-evident. In any case, with patients' forums about to be installed in every trust - and charged with reviewing services - there would seem to be a danger of duplication.
Issues picked out as worthy of attention for the NHS as a whole tend to be idiosyncratic when they are not trivial. The prize for the latter undoubtedly goes to the Kettering report, which gives as an example of notable practice the use of moulds 'to present the puréed meals in the shape of the original food - for example, puréed lamb chop is presented in the shape of a chop'.
So far we have concentrated on the weaknesses of CHI's review methodology in order to identify the issues involved in devising a more effective one for the future. The weaknesses make it difficult to use the reports either to compare hospitals or to give a picture of the state of the NHS as a whole. Nor do the marks awarded shed much light: CHI warns that, since the individual components - the seven pillars of wisdom - are not weighted, they cannot be added up to provide a summary score. And, in any case, the marks conflate two crucially different dimensions of performance - strategy and planning, on the one hand, and implementation at the operational level, on the other. However, there is no doubt that individual reports have illuminated some dark corners of the NHS. They have given visibility to the activities of trusts.
In one respect CHI reports have, clearly, had an immediate impact. They have blighted several managerial careers. Beyond that, it is difficult to assess the impact of its activities.
CHI's own criterion for measuring its success is that by 2004, it 'will have made a significant contribution towards achieving improvements in the quality of NHS patient healthcare and social care'.(3)

However, assuming that there will indeed be improvements, it will be almost impossible to assess how significant CHI's contribution has been.
For CHI's activities are only one element in a complex web. Its role is diagnostic, while other agencies are responsible for treatment. Following the reviews, trusts draw up action plans, outlining how they propose to remedy any deficiencies. Some appear to do this reluctantly and slowly. Only three action plans were available on the CHI website at the end of February for the six trusts whose reports were published in June 2001. In any case, drawing up action plans signifies little: what matters is their implementation. Here CHI has had no direct role so far: responsibility has been divided between the Modernisation Agency and the (late) regional offices. The former's role is to provide consultancy advice; the latter's was to monitor progress.
Given that regional offices have been preoccupied with their own demise, they do not appear to have been zealous in following up CHI reports.
And it remains to be seen how the new strategic health authorities perform.
Asking what the impact of CHI's activities has been is to put the wrong question, because it is unanswerable. The real question is whether CHI gives sufficient ammunition to those in the NHS who want to promote change. In other words, CHI is likely to be most effective when a trust chair or chief executive, often a new one, already has an agenda of action and can use a review report as leverage. As with all inspecting bodies, impact reflects the local balance of power as much as the regulatory activities themselves.
One other outcome of CHI's first two years requires noting: the way in which it is perceived within the NHS.(4) There are complaints about particular aspects of CHI's work among reviewed trusts.
These include some criticisms common to all regulatory agencies: for example, disquiet about the perceived inexperience of some reviewers and irritation about the amount of data requested. More significant than this sort of generic regulatory grumble, however, is that CHI appears to have achieved a general acceptance within the NHS.
There is a sense that its activities are legitimate if not always helpful.(5) In short, CHI is widely seen as an ally rather than as an adversary: a significant achievement.
Pointers for the future
The commission is addressing many of the weaknesses identified in our analysis. A new, slimmer review process is about to be launched, designed to reduce the investment of time by both reviewers and reviewed. The process of collecting and analysing data is being sharpened up. Patient diaries are being dropped. Staff and patient surveys are to be introduced. There will be a new emphasis on self-assessment by trusts. In short, the methodology - two years after the organisation's launch - is still evolving.
There are two ways of interpreting CHI's process of continual revisionism.The first is to see it positively as a mark of CHI's determination to show itself to be a learning organisation.
The second is to see it negatively as demonstrating CHI's difficulty in devising an appropriate methodology: in particular, the difficulty of building up the necessary intellectual and technical capacity.
This last point carries a warning for the future. The new responsibilities about to be imposed on CHI will require not just adapting existing methodologies but devising new ones.
Inspecting against standards - laid down by the National Institute for Clinical Excellence, the royal colleges and others - will be a very different matter from reviewing clinical governance.
Devising indicators that will measure the various dimensions of performance - quality as well as efficiency - will be an enormous conceptual challenge. The best hope for the future ability of CHI to deliver lies in a self-denying ordinance by ministers: restraint in the demands they make on CHI and the timetable they impose on it.
1. Commission for Health Improvement. A guide to clinical governance reviews in NHS acute trusts. CHI, 2001.
2. Day P, Klein R, Redmayne S. Why regulate? Bristol: Policy Press, 1996.
3. Commission for Health Improvement. Corporate plan, 2001-2004. CHI, 2001.
4. Day P, Klein R. Auditing the auditors. The Stationery Office/The Nuffield Trust, 2001.
5. Mitchell E. Survey of trusts reviewed by CHI. NHS Confederation, 2002.
Key points

CHI is the fastest growing institution in the NHS.

It has been generally well accepted by the NHS.

Its methodology is a compromise between looking at mechanisms and testing whether they deliver in practice.

There is no consistency in the basic data provided in CHI reports.

It remains to be seen what effect CHI reports have on improving patient care.
Patricia Day is senior research fellow at Bath University. Rudolf Klein is a visiting professor at the London School of Economics and the London School of Hygiene and Tropical Medicine.