General agreement all round then that scrutiny is a legitimate part of running a publicly funded service.
Which is just as well, because there really is 'no escape from data', as Nigel Edwards says. The introduction of the new quality accounts will keep the ball rolling.
It will be interesting to see whether a new system designed by the DH and implemented by local clinical teams, with the freedom to decide their own measures, improves things not only for patients but for clinicians and managers too. Will different measures at different levels, produced by different teams in different organisations, enable us to make comparisons across services? Will it mean less work for the people producing the data? Will the improvements logged enable patients, and inspectors on their behalf, to differentiate the degrees of improvement made? Some services may be vastly improved, for example, yet remain relatively poor compared with others; others may have barely improved at all simply because performance is already exceptionally good and there is nowhere left to improve to. Will the system enable us to understand these differences, in order to guide the next steps?
Will this much-needed context therefore be built in? For that to happen, outputs need to be comparable across services and organisations. I fear a lot of wheels will be reinvented, and a lot of data produced, before that stage is stumbled upon. My, you are going to have fun.