The Darzi report looks set to unleash another wave of reforms, but those making and implementing new policies must learn from past mistakes.

As we await Lord Darzi's final report, it is a good time to reflect on some of the lessons of previous major policy initiatives. Experience suggests some questions need to be asked about any new set of policies. There are people in the Department of Health whose job it is to ask such questions, but in the past their voices have not always been heard.

The questions are as follows. Is there a clear story about what the policy is for and how it fits into the wider system? Many policies to reform systems (choice, payment by results, practice-based commissioning and so on) did not seem to have a coherent narrative and failed to engage frontline staff.

Does the solution fit the problem? The NHS seems particularly prone to "policy entrepreneurs" who advocate solutions before a proper diagnosis of the problem has even begun.

Restructuring is a stock solution that generally indicates a serious lack of ideas. Any proposal that requires major structural change should be subjected to an exacting analysis of whether the policy aims could be achieved without it - and whether the benefits have any chance of outweighing the costs.

Is the policy joined up? The fact that this is a cliché does not mean it is not a valid question. It has proved surprisingly hard to achieve even within the DH, let alone with other government departments. How children's trusts fit with practice-based commissioning is a conundrum I for one still don't understand.

Is the policy based on evidence, or at least on design principles that have a strong evidence base? The standard of evidence for policy is different from, and often less complete than, that in medicine, and it is possible to overstate how far a fully evidence-based approach is achievable.

One source of evidence for policy making is the opinions of experts or stakeholder representatives. But not all representatives are expert, nor are they necessarily representative. This appears to be a particular problem in developing policy in primary care, where anecdote often seems to dominate. For example, policy suggestions from GPs often come from the most highly motivated and professional family doctors, who may bear little resemblance to the bottom 30 per cent of GPs. On other occasions, the opposite occurs and policy is designed to deal with the tail end of the distribution. There is nothing wrong with policy that is not universal until attempts are made to apply it universally.

Have the sums been done properly? A key area where high-quality evidence and analysis is required is in the calculation of the financial impact of the policy:

  • Is there a clear return on the investment?

  • Is it cost-effective compared with other possible uses of resources?

  • Does it double-count efficiency savings already counted elsewhere?

  • Does it allow for double running costs?

  • Are the timescales realistic?

One thing the service has found particularly irritating is the claim that funding for costly initiatives is in the baseline allocation. Finances look very different from the perspective of frontline organisations than they do from Richmond House. Money may well have been put into the baseline allocation for a particular purpose, but PCTs have many other calls on their funding which were not factored in at the centre.

A second problem with this perspective gap between centre and front line is the tendency for policy makers to ignore the effects a policy has in different parts of the country or system. Just because the net effect of a change is zero at national level, it cannot be assumed that there will not be significant winners and losers at a local level.

More questions

Have the key details of a policy been developed before it is announced? Experience suggests that policy of which only the name or broad outline is known is hazardous. If it really is necessary to announce something - perhaps for political reasons - make sure it is not so embellished that it becomes difficult to design something that fits the initial description and will actually work.

Is the timescale sensible? Policy makers tend to be over-optimistic about the speed of change and assume implementation can be much quicker than is possible. There is also a tendency to underestimate the impact policy can make in the long term. This seems particularly to plague policy on health improvement, where short-term targets are frequently missed - but sticking with them could have a huge impact in future.

Does the policy support doing what is best in clinical or population terms? It should not create incentives to act against the interest of patients. No incentive should create the risk of clinical decisions being distorted because a clinician has a financial interest in making them, for instance. It is also important to think about segments of the patient population, not to treat them as an undifferentiated group. So for example, continuity of care should not be undermined by reforms aimed at patients for whom it is not important.

A related error to solutions that do not fit the problem is problems that do not fit the solution. There is a tendency to overload policy instruments with too many objectives, to try to fix problems for which the policies were not designed.

I have lost count of how many things people would like to put in the quality and outcomes framework, for instance. There are numerous things people think the tariff and payment by results system can deliver, even though many of them are contradictory or would produce such a small incentive effect that nothing would happen.

Another related problem is making policies or incentives so complex that those on the receiving end cannot interpret them or are left unsure what they should do. "Modernising medical careers" and policy relating to capital and public dividend capital are examples of this, and modern equivalents of the Schleswig-Holstein problem. (This was an intractable diplomatic question. Lord Palmerston famously claimed that only three people had ever understood it: Prince Albert, who was dead; a German professor, who had gone mad; and Palmerston himself - and he had forgotten it.)

Bureaucratic burden

Brilliant policy can easily come unstuck in implementation, and policy makers can be undone by the failure of their carefully crafted ideas. There are some key questions about implementation that need to be designed into any policy:

  • Does the policy add to the bureaucratic burden of regulation, reporting or inspection?

  • Will those implementing it have a manageable number of tasks?

  • Does the policy take into account the different starting points of different organisations?

  • What else are you asking people to do at the same time? Trying to change the job plans of surgeons upon whom the fate of the waiting list target depended, for example, was probably not the best combination of policies.

  • Is there appropriate flexibility?

A warning sign that there may be a problem with a policy is if it has to be implemented to a rigid centrally defined template or timetable. The onus is on the policy maker to demonstrate why this is required.

Pilots are an important but much misused part of implementation. Too often they are not properly evaluated and participants are drawn from the enthusiasts for the policy, for whom success is almost certain. If it is designed not to fail, it is not an experiment and therefore of very limited value. We need more proper experiments rather than pilots masquerading as phase one of an implementation programme.

Some care is required in constructing and implementing policy to make sure it does not undermine the way the system needs to develop. There are a few areas where nationally set standards and goals are appropriate. But in general the presumption should be that decisions about how policy is implemented and local goals and targets should be taken at a level that is close to the front line.

In a system that depends on there being an appropriate balance between its different parts, it is important policy does not appear to favour one type of organisation over another. This includes being careful not to build conflicts of interest into the roles of organisations or individuals.

Above all, reforms should be subject to rigorous independent evaluation. There needs to be honesty where ideas have not worked and both successes and failures need to be recorded and learned from.

So far the policy-making process for the Darzi review has appeared to recognise some of these pitfalls. The danger is that in final drafting, bids by parts of the system trying to protect their position or create more radical policy can inject ideas that fail these tests. The success of the programme may depend on resisting this temptation.