Incentives matter more than systems
There is a considerable literature about this, going back 50 years to economic reviews of the Soviet Union. The interplay of strong central control (almost “terror”) and partial measurement of an issue makes the apparatchiks “hit the target and miss the point”. So, apocryphally at least, nail factories in the Soviet Union first made billions of tacks (when given a numbers target) and then made 6-foot-long nails (when given a weight target); neither, of course, was fit for purpose. In the NHS, A&E departments employed “hello nurses” when given a target for how quickly to first deal with patients entering an emergency room, then designated corridors as wards when given a target for how quickly patients were processed through casualty. (Before anyone thinks I'm making a party political point, I'm not: the target for dealing quickly with a patient on first arrival dates back to the Patient's Charter, introduced by the Conservative government of John Major.)
An immediate conclusion is that this sort of perverse behaviour is the consequence of having a command-and-control system, and the analogy with the Soviet Union only encourages this conclusion.
As I have alluded to in an earlier posting, however, I am starting to wonder if there really are different behaviours in a highly marketised system. The precise mechanisms by which incentives are rewarded are different, but the incentives themselves remain the same.
My work in the last week has involved some detailed study of the HEDIS indicators, the measures of healthcare quality used in the US. In theory these should have two advantages over politically set targets.
First, they are more clinical than managerial, concentrating on the processes, provision and outcomes of care at a disease-specific level; in theory, each therefore has less widespread influence than a target that covers the whole of a given hospital.
Second, their method of influencing services is more subtle. Rather than being a 'must meet' target set by one central purchaser, they act (inasmuch as they act outside the health systems themselves) as a kind of “consumer reports” for the numerous corporate purchasers of healthcare; in theory, this is one piece of information a corporate purchaser may use in choosing and negotiating which health plans it will make available to its employees. The agency of approval/disapproval is thus a less direct, “market” one.
However, in conversations with various experts concerning how these indicators work, I have discovered that exactly the same concerns apply to the working of HEDIS measures as to NHS targets:
having to comply with measurements that you don't agree with because the purchaser insists on it (for government, read corporations); and
distortion of clinical priorities, particularly away from preventative work, because of the unintended effects of indicator definition.
Long-term students of the publication of cardiac surgery outcomes in New York state will remember that gaming in the reporting of co-morbidities and cream-skimming (including dumping sicker patients onto the Pennsylvanian health system) were among the reported negative behaviours.
So maybe the moral is that unintended consequences and regrettable behaviour are more universal than situational. Even good measures can lead to bad behaviour, and neither markets nor Stalinism can guard against this.
This poses two questions:
Why is this?
Why bother measuring anything then?
The answer to the first is, I think, bound up in a better understanding of how incentives work inside complex systems like healthcare; an attempt to suggest how this might work will be the next post on this blog.
The answer to the second is simple: for better or worse, the human mind equates what matters with what it can measure. Measurement may be imperfect, and it may have perverse effects that we need to guard against, but if we stop trying to measure, however imperfectly, what we are saying is that the quality of healthcare doesn't matter.