Andrew Sturdy, Ian Kirkpatrick and Gianluca Veronesi respond to criticism from the industry of their research on the use of management consultants by NHS trusts
We are pleased to engage in debate over the issue of the use of management consultants in the NHS.
As outsiders to the NHS ourselves, we recognise the potential value of external viewpoints in some contexts. However, we are concerned that some of the industry's criticisms of our research are misdirected, and we want to respond to them so that debate can focus on evidence-based policies.
As academics, we are the first to recognise that our study has limitations. Indeed, there is a lengthy section of the paper where we outline these and provide suggestions for future research. Having said that, we think that some of the criticism raised in the media is not valid.
This could be due, in part, to the technical nature of our analysis. Perhaps we should have made our points more clearly, given that the audience extends well beyond research methods experts.
Our response is below, but if you are not interested in technical issues, please skip to the last paragraph!
- The suggestion that our findings stem from trusts using consultancies because they are inefficient, and not the other way round (“reverse causality”) – as one might expect, we were very concerned with this problem of “endogeneity”. It has been dealt with exhaustively by the statistical techniques employed, a point that received full support from the journal’s anonymous expert reviewers, the editor and a number of colleagues who have given feedback on our work. Essentially, we can confidently rule out that management consultants are more likely to be hired by inefficient trusts, or that their impact on our proxies for efficiency is driven by previous levels of performance. In short, it is really NOT an issue.
- The claim that the inefficiency measure we used was not robust – we accept that the Reference Cost Index (RCI) figure we used is far from a perfect measure of hospital trusts’ efficiency. As a consequence, to confirm the reliability of the findings, we used an alternative accounting measure of efficiency and the results were, as shown in the paper, exactly the same.
- The view that the research simply presents correlations rather than any indication of causality – as explained above, our analysis goes well beyond that, including by using data over a number of years and controlling for other factors. The research is not based on randomised controlled trials, as is common in medicine and the natural sciences, but these are not possible in this context.
- On a highly technical point around the quality of the data (raised in Richard Lewis’s article in HSJ), a 2014 review by Deloitte, commissioned by NHS Improvement (still called Monitor at the time) and referred to by Richard Lewis, did find some discrepancies in the healthcare resource groups (HRGs) included in the RCI database. Two specific problems were pointed out, but it is important to highlight that these were observed at the HRG level rather than in the RCI itself.
- More generally, as is customary when running statistical analysis, we did check whether our variables, including the RCI, were approximately normally distributed. This matters because the standard linear regression model assumes normality (of the error terms), and we did not observe any issue of non-normality in the data. Additionally, all the other descriptive statistics, especially measures of the dispersion of the data (such as the standard deviation), gave no cause for concern. Lastly, there was no evidence whatsoever of drastic swings in trusts’ performance. We are, therefore, confident that, while not perfect, the RCI database provided sufficiently reliable data for the analysis. Again, this was confirmed through the review process.
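For readers curious what this kind of preliminary screening looks like in practice, the sketch below runs the checks described in the last bullet (symmetry, dispersion) on synthetic data. The variable names, thresholds and the simulated scores are purely illustrative and are not the authors' actual data or procedure; RCI scores are indexed so that the national average is 100, which the simulation mimics.

```python
import math
import random
import statistics

def skewness(xs):
    """Adjusted Fisher-Pearson sample skewness; near 0 for symmetric data."""
    n = len(xs)
    m = statistics.fmean(xs)
    s = statistics.stdev(xs)
    g1 = sum((x - m) ** 3 for x in xs) / (n * s ** 3)
    return g1 * math.sqrt(n * (n - 1)) / (n - 2)

def screen(index_values, skew_limit=1.0):
    """Summarise the distributional features a regression analyst checks:
    central tendency, dispersion, and marked asymmetry (illustrative only)."""
    return {
        "mean": statistics.fmean(index_values),
        "sd": statistics.stdev(index_values),
        "skewness": skewness(index_values),
        "roughly_symmetric": abs(skewness(index_values)) < skew_limit,
    }

# Hypothetical RCI-like scores for 120 trusts, indexed around 100.
random.seed(1)
rci_like = [random.gauss(100, 5) for _ in range(120)]
report = screen(rci_like)
print(report)
```

A real analysis would run such checks on each year of panel data and follow them with formal normality tests on the regression residuals; the point here is only that these are routine, inspectable steps rather than a black box.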
For those who are interested, the article is still available to all on open access at the Policy and Politics journal website.
In conclusion, we have sought to respond to specific criticisms, but commentators have remained silent on one point: why, after around £600m was spent on consultancy over four years, there is no sign of improved efficiency overall across 120 trusts.
While “consultancy” is a broad term and not all of it is directed at efficiency improvements, this finding remains worthy of serious attention and of further research to find out why. In an attempt to address this important issue, we agree with Richard Lewis and others that there should be more transparency in trusts over the use of management consultants (although the NHS is already much more open than other sectors).
More generally, we would be very glad to re-run the research with the collaboration and support of the consulting industry or others to provide further clarity to the debate. We have also made this offer to the industry body in the UK, the Management Consultancies Association. Likewise, we shall be engaging with various other interested parties in trying to build the evidence further for an informed discussion of policy options.