The phrase ‘value for money’ has considerable currency in the prevailing economic climate.

It is often used without consideration of its meaning or the intent underpinning it. Here we look at weaknesses in the current system, and areas for further discussion.

Commissioners buy in services from providers to deliver care for individuals. These providers can be in-house (although these are few and far between), private sector or voluntary sector.

So, how do commissioners establish value for money at the point of purchase, and of equal importance, how is VFM measured post-placement?

Taking the point of purchase first, there are several inputs at this stage which affect a commissioner’s decision-making process:


Urgency of need

An individual whose mental health is such that he is felt to be a danger to society and/or himself will be a higher priority than somebody suffering the effects of early-onset dementia but supported in their own home by family. That family support may, however, be failing, in which case a lack of urgency may exacerbate the deterioration, affecting future treatment, outcomes and cost.


Comparative cost

Where there is a choice available (or rather, visible to the commissioner - see below), comparative cost is a consideration. However, it is a complex one, as it should be viewed alongside the potential outcomes, which will affect the individual’s long-term requirements. So this should really be a consideration of value in terms of return on investment.

Placement availability (choice)

On the face of it, this is a fairly obvious consideration. Unfortunately, in reality it is not: assessing choice relies upon clear visibility of service availability. Given the raft of pressures weighing on commissioners, it is not always easy to have a complete view of all services available, locally and/or nationally. For the more specialised service provisions (and so the more expensive), it becomes even more difficult to maintain a current view.

Availability of funding

Budgetary pressure affects decision-making. Mostly, consideration of ROI falls away in favour of shorter-term decision-making, weighted more heavily towards simple cost analysis.

It is worth noting here that the reality underpinning this position is that patients left untreated will deteriorate quite rapidly and so will require urgent attention at some point in the future. Urgent attention often impacts on the decision-making process, resulting in more expensive treatment and/or the reduced likelihood of positive and controllable outcomes. Most mental health professionals refer to this as the ‘revolving door syndrome’ where patients are seen rotating through services over the course of their lives.

Take these categories individually and you have some headaches. Overlay them, and apply that to a raft of applications for funding/placement of varying need and complexity, and you have a permanent migraine.

From a commissioner’s viewpoint, queries about VFM are often received from a perspective of justifying individual decisions: this is a retrospective act given the difficulties involved in measuring and benchmarking.

This isn’t a commissioner-beating narrative. In fact, it is the opposite. My experience of commissioners is that they are committed professionals who have the unenviable job of trying to account for budgetary spend while taking into account the most complex of human need (alongside managing their assessment teams and input from individuals, families, doctors and fellow mental health professionals).

When considering VFM in this arena, the only real conclusion is that it cannot really be adequately measured by commissioners in the system as it currently stands (please forgive the generalisation, there are probably some fine examples where the trend is being bucked). Pressures to provide measurement through financial accountability, in my experience, just result in internal friction between departments and budget holders. Friction in financially grounded systems is a healthy thing, but only when in context. By its very nature, excessive friction will always lead to the development of inertia within a closed system.

Consider that the outline above represents one commissioning team. Try to assemble a regional or national view, and the fog only thickens.

Now let’s look at the second part of the VFM process – post-placement.

We could spend a considerable amount of time analysing this part of the process but the facts are quite simple. Once a placement is made, its visibility is usually dependent on a couple of pressures within the system.

Placement failure

This relies in most cases on the provider flagging that the placement is failing, thereby raising the need to re-assess the patient. The placement then re-enters the system described above, with urgency usually ranking highly on the measurement scale.

Routine review

Reviews are carried out routinely, often six-monthly. These reviews are mostly conducted as clinical reviews of the level of care. There is a financial outcome, but from my experience it is a consequence rather than a prime consideration. Clinical reviews are, of course, conducted by clinicians who, by their very nature, tend not to be commercially-versed (and shouldn’t be expected to be).

An important point to note here is that the maxim of ‘what gets measured, gets done’ applies. In recovery-based mental health services in particular there is traditionally little real connection between measurable outcomes and continuation of service provision written into contractual arrangements: you only get answers to the questions you ask.

The result of the current approach is that instead of meaningful reporting with the benefit of intelligent feedback, we have defensive accountability for clinical decision-making. In my opinion, this is an inelegant and inappropriate pressure to apply. Most statements I have seen claiming that VFM has been demonstrated are not anchored to points of common stability. Certainly, if you look at cross-border measurement between different bodies, there is little, if any, commonality worth relying upon.

How should we measure VFM?

Before we try to answer that question, there is a more important one: why do we want to measure VFM at all? What data are we looking for, what do we plan to do with it, and what are the benefits?

There are a number of reasons that immediately spring to mind: for the purpose of accountability; to demonstrate fidelity; to justify decision-making. The problem with these (more usual) reasons is that they are defensive and so by their nature do not on their own deliver constructive outcomes.

These are important outcomes of measuring true VFM but they should not be the primary objectives. The real reasons driving the measurement of VFM should be:

  • To enable the delivery of more elegant solutions

Outcome: improved systems; improved decision-making; lower costs. This results in the delivery of short-term improvements whilst feeding long-term decision-making.

  • To enable strategic planning

Strategic planning serves to improve control of marketplaces. This is not to be confused with state control; rather, it means purchasers (customers) having a strategic say in the relationship with providers (suppliers) and a leading role in the development of markets. Simple economic principles. The fractured nature of these specialised markets has historically led to excessive control by suppliers.

The involvement of the private sector is important in a balanced and healthy marketplace, but that does not excuse poor market control by customers. Historically, the development of services in this area of healthcare has been led solely by the private sector, with the public sector (i.e. the customer) filling a passive role. Passivity has a cost.

So - back to the main question, how do we measure VFM? The good news is that it is possible, the not so good news is that it isn’t easy (like most things that are worthwhile).

The starting point has to be to decide where to inject the measurement process. Traditionally, as outlined above, this is an internal pressure. I would suggest that it should be considered almost as a parallel process but with synaptic links to enable intelligent feedback between component parts.

The process itself has to have real objectives (by which I mean measurable ones).

It should be conceived in a positive manner. It should be challenging, but should demonstrate its worth through the delivery of meaningful results to commissioners, patients and providers.

This sounds like a mission statement under development, but I can confirm that it is achievable; indeed, my organisation is achieving it in our work in the North West of England with a PCT and city council. Over a two-year period this work has delivered over £1.5m of immediate and (mostly) recurring savings without the loss of any placements and, in most cases, with improved relationships between providers and commissioners.

This project has been approached from a bottom-up perspective, working on the front line between commissioners and providers. After two years of fairly intensive work (and pretty good results to justify it), a more strategic view of the market is developing, and the balance of control is shifting.

To measure VFM you first have to establish what it is and decide what you want it to do for you.

You have to be prepared to accept challenges to your presumptions of what matters and what doesn’t.

Commissioners need to accept that commerciality matters and that, based on the philosophy (perhaps too bold a description) outlined above, considerable value can be added to quality of care and decision-making processes through an investigative approach working upwards from the individual placement.

A parallel process with intelligent feedback links into commissioning systems can deliver real results.