Skin in the game
As I have alluded to already, the concept of moral hazard (more properly, the need to ensure that individuals have an incentive to minimize unnecessary use of health services) is a strong driver of US health policy, and a key point of distinction from British health policy.
It is, of course, reasonable to seek to apply limits to healthcare expenditure, given its capacity to skyrocket. It is hard to think of a sector more prone to supplier-induced demand, and this, particularly in the field of pharmaceuticals, is a fair description of where medicine stands in all industrial and post-industrial economies. Seeking to limit direct patient demand, or to make that demand more focused and efficient, is one strategy for limiting healthcare expenditure.
Broadly put, the theory of moral hazard states that if you make a good or service free to the consumer, they will use more of it, and use it frivolously, without having a genuine need that requires satisfying. By putting some of the cost onto the consumer (making sure that they have “skin in the game”), this frivolity of consumption is reduced. A masterly review by Malcolm Gladwell in the New Yorker traces the history and limitations of this thesis in American health policy.
Since it was first discussed seriously in the context of healthcare in the late 1960s, the fear of moral hazard has had a profound influence on how the American system seeks to limit healthcare expenditure. This matters all the more given the rise in expenditure since 2000, as the HMO model of managed care has been increasingly rejected. Understanding its importance requires a little history and context. Historically, reimbursement for healthcare providers has in most instances been on a fee-for-service basis. Reasonable enough, except that this creates a situation in which the incentive for providers is to do more even where the health benefit is marginal. This inevitably leads to increased cost. A good example is the enthusiasm for diagnostic testing, or for treatments that in the UK would be seen as on the cusp between the lifestyle and the clinical. This is one major cause (there are others) of very high expenditure on healthcare in the US without necessarily better results to show for it; see the Commonwealth Fund report referred to below.
An obvious economic response is to discourage demand for services, but the question then is "demand from whom?" One approach is to diminish the demand generated by doctors' own choices. Simplifying somewhat, most HMOs sought to do this by paying capitation funding for each patient, thereby creating an incentive not to use services likely to have only marginal benefit.
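To make the contrast in incentives concrete, here is a minimal sketch in Python. The fees and the capitation rate are numbers I have made up purely for illustration; they bear no relation to real reimbursement rates.

# A toy comparison, with invented numbers, of the two payment models discussed above.
def revenue_fee_for_service(visits, tests, visit_fee=100, test_fee=250):
    # Under fee-for-service, every extra visit or test adds to the provider's revenue.
    return visits * visit_fee + tests * test_fee

def revenue_capitation(patients, annual_rate=600):
    # Under capitation, revenue per patient is fixed; an extra marginal test adds cost, not income.
    return patients * annual_rate

print(revenue_fee_for_service(visits=1, tests=1))  # 350: ordering the marginal test pays
print(revenue_fee_for_service(visits=1, tests=0))  # 100: not ordering it does not
print(revenue_capitation(patients=1))              # 600 either way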
The other approach (and, given the rejection of the HMO model, of explicit "socialized" rationing, and of a single-payer system, about the only one left) is to reduce the demand for services from patients themselves, by putting an upfront charge on the use of services. The logic is that consumers (and in this case that would be an appropriate word) will only use the services they need if they have to pay for them. As the HMO experiment has been increasingly rejected (at least in its pure, capitation-based form), this has become an increasing focus for cost-reduction (or, more accurately, cost-shifting) policies. It may be done by creating co-pays for basic services such as prescriptions and doctors' office visits; by setting a high annual deductible (i.e. you pay for the first $500 of healthcare each year); or by limiting coverage (i.e. you have to pay for certain services yourself). It is reaching its apotheosis, in policy terms, in high-deductible or “consumer-directed” policies, which are specifically designed to encourage consumers to take the risk of paying a great deal should they fall ill, on the gamble that they won't have to.
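As a rough illustration of what this shift means for an individual patient, here is another small Python sketch. The plan designs, the $20 co-pay, the $500 deductible, the 20% coinsurance and the claim amounts are all assumptions invented for the example, not taken from any real policy.

# A toy comparison, with invented figures, of two cost-sharing designs.
def patient_share_copay_plan(claims, copay=20):
    # Traditional design: a flat co-pay per claim; the insurer pays the rest.
    return copay * len(claims)

def patient_share_high_deductible(claims, deductible=500, coinsurance=0.2):
    # "Consumer-directed" design: the patient pays everything up to the deductible,
    # then a percentage (coinsurance) of costs above it.
    total = sum(claims)
    if total <= deductible:
        return total
    return deductible + coinsurance * (total - deductible)

claims = [120, 80, 300, 1500]                  # a year's bills for a moderately sick patient
print(patient_share_copay_plan(claims))        # 80
print(patient_share_high_deductible(claims))   # 800.0, with most of the cost landing on the patient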
The trouble with the concept of moral hazard is that it applies imperfectly (if at all) to health services. These cannot sensibly be considered "goods and services" in the usual way. Hospitals are not hotels: people tend not to go there for fun, but because it is required for their health. An important study by the RAND Corporation, nearly 30 years ago, showed that while there was a reduction in health service use once patient charges were introduced, the reduction applied equally to necessary and unnecessary healthcare: patients had insufficient skill and knowledge to work out for themselves when they should take two Tylenol and go to bed and when they really did need to see the doc. Even with the advent of the expert patient, and the mushrooming of health information on the web, there is little reason to suppose that this has changed.
In other words, “skin in the game” is too blunt an instrument to increase the efficiency (properly understood) of healthcare. Indeed, there is an argument that, by reducing the incentive to seek effective (and relatively cheap) care early, it increases expensive emergency care later, and may not, in the long run, reduce expenditure at all. These arguments, of course, do not even address the issues of equity of access for deprived and marginalised groups.
So is there any form of financial commitment by patients that could work? The track record of prescription charges in the UK may be one example: an approach which, if it has not clearly limited demand, has at least provided a source of income, and has not had the negative effect of people going without prescriptions on grounds of cost, presumably because the safeguards built in for the old, the sick and the poor have been effective. I must also confess a certain sympathy with the idea of charging a nominal fee to patients who fail to attend appointments.
However, moral hazard as currently applied in the US seems little more than an intellectual fig leaf for a major programme of cost shifting from major corporations, as purchasers of care, to individual patients. The desire of corporate purchasers to do this is understandable: the "big three" automotive manufacturers, for example, are becoming increasingly uncompetitive because of their healthcare costs, and then there is the apocryphal(?) story that Starbucks spends more on health insurance than on coffee (which explains a lot, not least why I go to Zoka's). Nonetheless, at some point this must lead to a consumer revolt. Eventually the right will run out of plausible "values issues" to distract low- to middle-income earners in the heartlands from noticing that they are being shafted. At which point, what? Clinton 2?