I read with interest 'All quiet on the front line' (pages 24-26, 14 May), as performance indicators are, on the face of it, a very useful tool and appear to provide a sensible transition from management theory to management practice. The cycle of defining a standard, measuring performance, comparing performance with the standard, identifying reasons for failure, fixing them and re-measuring is widely used in many industries.

But up to now the concentration has been largely on output measures as indicators of performance. Waiting times, waiting lists, throughput, occupied bed days and unit costs, for example, have been high on the list of 'most useful' indicators.

It seems bizarre that the most 'successful' hospital in the country would be the one that killed everybody within 10 minutes of arrival. There would be little - if any - waiting list, waiting times would be well within Patient's Charter standards, throughput would be immense, at least in the short to medium term, and unit costs would be unbelievably low.

Although extreme, this example illustrates the difficulties associated with performance indicators.

The NHS has been wedded to indicators that, taken seriously, would lead it to achieve the very opposite of that for which it was created. We don't even distinguish between deaths and discharges: we don't care how patients leave, as long as they do, and quickly.

Unless sensible indicators are chosen in the first place, the method is useless as a management tool. To give a balanced picture, therefore, a wide range of indicators would need to be chosen, and standards set for each. This presents another major problem: indicators would have to cover all aspects of service delivery, and could run into hundreds, if not thousands.

They must then be capable of measurement and of articulation into standards. Given the paucity of systems designed to do this in most trusts, that will be difficult and expensive to achieve - even were the money available. Equally, in clinical terms, only a small percentage of interventions is clinically proven, and clinicians are still unlikely to respond positively to either managerial inspection or personal criticism. How, then, are clinical indicators to be defined?

If performance indicators are to stand any chance of achieving improvements in the delivery of care, we would also have to be certain that meeting the standards delivers the desired result. This, of course, assumes that we know what that result is. Neither the NHS nor the providers within it have a clear enough understanding of what they are trying to achieve to be able to do this.

Most improvements in public health have been achieved through changes in environmental factors - better housing, sanitation, nutrition - not through the treatment of ill people. In addition, it is now generally accepted that there is a clear link between poverty and poor health.

If this is true, and if the objective of the NHS is accepted as, broadly, 'the improvement of the health of the population', the most significant contribution the health service could make to this aim would actually be to employ more people, and to pay them more.

Now there's a thought.

Andrew Evans

Group business manager

Public Health Laboratory Service

Wales