Statisticians have raised doubts about the validity of the nascent friends and family patient satisfaction test amid claims some trusts are “manipulating” their scores.

The test, based on the net promoter score survey used by commercial companies, will see all hospitals ranked on the basis of whether patients would recommend them to a friend or family member.
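
For context, the commercial net promoter score on which the test is modelled is conventionally calculated as the percentage of “promoters” (scores of nine or 10 out of 10) minus the percentage of “detractors” (scores of zero to six). The Python sketch below illustrates that conventional formula only; the NHS pilot’s exact calculation may differ.

```python
def net_promoter_score(ratings):
    """Standard commercial NPS: % promoters (9 or 10) minus % detractors (0 to 6).

    `ratings` is a list of integers on the 0-10 scale. This is the commercial
    formula only; the NHS pilot's exact calculation may differ.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Example: 6 promoters, 3 passives and 1 detractor out of 10 responses -> 50.
print(net_promoter_score([10] * 6 + [8] * 3 + [3]))
```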

It is currently being piloted across the 46 acute trusts in the Midlands and East strategic health authority cluster and will be rolled out nationally next April.

But a senior figure at the Royal Statistical Society expressed concern about data collection methods and the lack of plans to standardise sample sizes, which could leave some trusts with responses from as few as 15 per cent of patients.

Peter Lynn, professor of survey methodology at the University of Essex and a leading figure in the society, told HSJ: “Such a low response rate would seriously call into question whether the results have any meaning at all.”

Prof Lynn added that comparing two hospitals with vastly different response rates could result in a “misleading” comparison. The Department of Health’s NHS Friends and Family Test Implementation Guidance said: “The minimum response rate for organisations is expected to be around 15 per cent; for the majority, this figure could be much higher.”
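
To illustrate the statisticians’ concern in rough terms (an illustrative sketch, not analysis from the society or the guidance), the margin of error on a percentage-style score widens sharply as the number of responses falls, even before any bias from self-selecting respondents is considered. The figures below assume responses behave like a simple random sample, which a 15 per cent response rate does not guarantee.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error, in percentage points, for a proportion p
    observed from n responses, treated as a simple random sample (optimistic,
    since a 15 per cent response rate is self-selecting)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# A trust discharging 1,000 patients a month: 15% vs 50% response rates.
for n in (150, 500):
    print(f"{n} responses: +/- {margin_of_error(0.5, n):.1f} points")
# 150 responses: +/- 8.0 points; 500 responses: +/- 4.4 points
```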

Andrew MacPherson, director of customer service strategy at the NHS Midlands and East strategic projects team, which is managing the national programme, said those responsible for quality on the project “feel the sample was statistically relevant”.

“We are nonetheless currently commissioning a major piece of research on behalf of the DH on the presentation and calculation of the test,” he added.

The society also raised concerns that different trusts were using different methodologies to collect their data in the pilot exercise.

Some are using the 11-point net promoter score scale used in the commercial sector (a score from zero to 10). Others are using the six-point scale advocated by the national guidance, on which patients say they are anywhere from “extremely unlikely” to “extremely likely” to recommend the service.

Mr MacPherson said the two scales “map” on to each other and while the collection issues could “distort” results to some degree, the SHA cluster remained confident the data was credible.

From April, all trusts will have to use the six-point scale rather than the 11-point system, which should remove this discrepancy, he added.
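
One way the two scales might be “mapped” for illustration (a hypothetical grouping, not the Department of Health’s published methodology, which the commissioned research on calculation and presentation is intended to settle) is to treat “extremely likely” responses as promoters and neutral or negative responses as detractors:

```python
# Hypothetical grouping of the six-point scale into an NPS-style score.
# The categories below are an assumption for illustration, not the
# Department of Health's published methodology.
PROMOTERS = {"extremely likely"}
DETRACTORS = {"neither likely nor unlikely", "unlikely",
              "extremely unlikely", "don't know"}
# "likely" is treated as passive: counted in the total, in neither group.

def six_point_score(responses):
    promoters = sum(1 for r in responses if r in PROMOTERS)
    detractors = sum(1 for r in responses if r in DETRACTORS)
    return round(100 * (promoters - detractors) / len(responses))

# Example: 7 "extremely likely", 2 "likely", 1 "unlikely" -> score of 60.
print(six_point_score(["extremely likely"] * 7 + ["likely"] * 2 + ["unlikely"]))
```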

Anonymous senior sources in the region complained to HSJ that “game playing” was undermining the credibility of results.

One senior figure said trusts were “manipulating” scores in order to achieve better rankings.

The source added: “I think the test is fundamentally flawed. The national patient survey is not gameable but this is. The [annual] national patient survey is statistically valid; this is producing results which are statistically incredible.”

Mr MacPherson dismissed the accusations as “anonymous sniping”. He said the test was “currently being audited as part of a planned regular review of the system which is only just six months old and ‘settling down’”.

Papworth Hospital Foundation Trust topped the most recently published Midlands and East league table in September, with a score of 87. The Princess Alexandra Hospital Trust came bottom, scoring 35.