Ratings for hospitals could improve accountability and aid choice among users and commissioners if they are simple, valid and reported publicly, says Jennifer Dixon

Should there be “Ofsted-style” ratings for health and social care providers? This was the question set by the secretary of state, Jeremy Hunt.

‘Trying to choose a care home, domiciliary care provider or a general practice is not helped by the confusing array of information’

We’ve been there before, and the added value of previous ratings relative to their costs is not clear either way. Nor indeed is the potential for ratings to have an impact in the future if there were improvements in their design and use.

So what might ratings add today? There are two obvious gaps.

First, there is currently no independent comprehensive assessment of quality across all providers and across the full spectrum of performance. Second, there is nothing from a single trusted source that is simple for the public to use.

Should these gaps be filled?

The answer depends in part on what the main purpose of a rating is. There could be at least five: to increase accountability to the public, users, commissioners of care and (for publicly funded care) to Parliament; to aid choice; to help improve the performance of providers; to identify and prevent failures in the quality of care; and to provide public reassurance as to the quality of care.

Information for patients

The Nuffield Trust’s analysis suggests ratings could improve accountability, provided they are simple, valid and reported publicly. Ratings could also aid choice among users and commissioners.

There is a big gap here: trying to choose a care home, domiciliary care provider or a general practice, in particular, is not helped by the confusing array of information from different sources or, more often, by the lack of it. The public is left in the dark. This is the space Ofsted fills for schools.

‘Bringing financial performance into a rating for quality risks a provider making inappropriate trade-offs’

Ratings are associated with better provider performance, but there is also the risk that “what is measured becomes what is managed”. The more sanctions that result from a rating, the greater the perverse effects: the overall impact depends less on the rating per se than on the wider system in which it is embedded.

For hospitals, a “whole institution” rating is more of a managerial concept than a clinical one; an aggregate rating should therefore include service-level information in future. That is what patients need.

A rating by itself is unlikely to be useful in spotting lapses in quality, particularly in hospitals. Here the analogy with Ofsted’s ratings of schools breaks down: hospitals are large and complex, seeing large numbers of different people around the clock, people who are sick and who may die.

Put another way, the risks managed by hospitals vastly outweigh those managed in schools. For social care providers, the risks may be lower but many are still dealing with frail, ill and otherwise vulnerable individuals. There should therefore be a clear “health warning” on the rating.

Ratings road map

On reassurance, while the public may be forgiving of a rating system’s failure to spot some lapses in quality, reassurance is more likely if they could be confident there was a rapid and effective system for investigating and dealing with failure. This is where the proposed new “chief inspector of hospitals” could have a role.

The rating should not just be an aggregate statement, but a set of “dials” covering the three “Darzi” domains of quality: experience, effectiveness and safety; and possibly the quality of governance. The rating should be based on routine data and inspections. The information should be refreshed at least quarterly. Bringing financial performance into a rating for quality risks a provider making inappropriate trade-offs between financial issues and the quality of care.

‘Ratings for hospitals might work but the potential benefits would only be realised if some key conditions are fulfilled’

Any rating should be developed over time, with its design involving key stakeholders, including groups representing users and the public, and drawing on existing work. We suggest a “road map” approach over the next five to 10 years.

The most obvious organisation to do the rating would be the Care Quality Commission. But the CQC would need political support, backing from the main national stakeholders, resources, time to develop, as well as stability over time.

Any new system should be fully evaluated to weigh its benefits against its drawbacks. Consideration should be given to road-testing any new system to avoid unintended consequences or perverse effects.

If the government does press ahead with ratings, it may be easier to start with ratings for social care and for general practices.

Ratings for hospitals might work, but the potential benefits would be realised only if some key conditions are fulfilled: no extra burdens on providers, given all the current monitoring requirements; support and time to develop the rating system; design and presentation led by the sector, with groups representing the public and users of care meaningfully involved; market research on how the ratings might be presented and used by the public; close links between the rating system and systems designed to spot lapses in quality; and evaluation of the costs and benefits from the very beginning.

Dr Jennifer Dixon is chief executive of the Nuffield Trust