Changes in the diagnosis and treatment of diabetes are a great example of how healthcare delivery constantly evolves based upon new discoveries. And the history of these changes may help illuminate some thinking about health reform and the development and use of comparative effectiveness research (CER).
First, a little background on diabetes.
Diabetes Background
Diabetes mellitus (or “sugar diabetes”) occurs when the body has problems regulating the level of sugar (specifically glucose) in the blood. This can happen because the pancreas doesn’t produce enough insulin, or because the body’s organs become resistant to the actions of the insulin that is present – or sometimes both occur simultaneously. Impaired control of glucose means that blood sugar levels get too high, which produces problems in the eyes (leading to blindness), in the kidneys (leading to kidney failure), and in the small blood vessels elsewhere in the body, which can cause nerve damage and reduced oxygen delivery to the extremities – particularly the legs and feet (leading to amputations).
In olden times, diabetes was diagnosed by detecting sugar in the urine. (Medical lore says this was done by taste….) However, until insulin was discovered in 1921, there were no therapies for severe insulin deficiency. And even once insulin became available, sugar in the urine was still the way diabetes was diagnosed and monitored – usually with a dipstick that changed color depending on the sugar concentration.
It wasn’t until the 1960s that measuring blood glucose levels became possible – and even then only in doctors’ offices, because the machines were large and expensive. In the 1980s, machines small and cheap enough for patients to monitor their blood sugar levels at home became available. This enabled patients to start adjusting their own insulin dosages based upon their blood sugar levels. (Before this it was too dangerous for patients to significantly alter their insulin dosages: too little insulin leads to sugar levels that are too high, causing long-term damage, while too much insulin can drop sugar levels too low, leading to confusion, coma and death.)
In more recent years it was discovered that keeping diabetics’ sugar levels near normal could prevent essentially all the adverse consequences of diabetes, i.e. blindness, renal failure and amputations. But doing this based upon finger-stick blood sugar levels even 3 or 4 times a day was tricky – and those were just single data points. So in the mid-1970s it was proposed that measuring the amount of hemoglobin in the blood that had combined with glucose would give a measure of the average blood sugar level over the 2-3 month life of the red blood cells. (It was known that glucose irreversibly binds to the hemoglobin in red blood cells in a way that directly correlates with the blood sugar level.) This test, known as “glycosylated hemoglobin” (or HbA1C, or simply A1C), has been increasingly used over the past few decades to monitor diabetics and adjust their treatments, with the goal of keeping A1C levels below 7%, since the level in people without diabetes is 4-6%.
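To make the A1C-to-blood-sugar relationship concrete, here is a minimal sketch using the linear regression from the ADAG study (eAG ≈ 28.7 × A1C − 46.7 mg/dl), a commonly cited rule of thumb for converting an A1C percentage into an estimated average glucose. The function name and printout are mine, for illustration only:

```python
def estimated_average_glucose(a1c_percent: float) -> float:
    """Convert an A1C percentage into estimated average glucose (mg/dl),
    using the ADAG study regression: eAG = 28.7 * A1C - 46.7.
    An approximation, not a lab result."""
    return 28.7 * a1c_percent - 46.7

# The 7% treatment goal corresponds to an average glucose of about 154 mg/dl;
# the 4-6% non-diabetic range maps to roughly 68-126 mg/dl.
for a1c in (5.0, 6.0, 7.0, 8.0):
    print(f"A1C {a1c:.1f}% -> eAG {estimated_average_glucose(a1c):.0f} mg/dl")
```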
Care Lags Discovery and Development of Innovations
Despite our improved ability to monitor diabetes, it is still underdiagnosed and poorly managed. It is estimated that about 6 million people in the US have diabetes but don’t know it – about 25% of all people with diabetes. And in 2003-2004, only about 57% of people with diabetes had A1C levels below 7%. (The medical and lost-productivity costs for all people with diabetes may be approaching $200 billion.)
And the prevalence of diabetes is increasing – and with it so are the costs of treating people with diabetes. Last year I wrote about this, and now the CDC has updated information showing the continuing growth in the number of people in the US diagnosed with diabetes:
Source: http://www.cdc.gov/diabetes/statistics/prev/national/figpersons.htm
The treatment of diabetes has also changed. After insulin was discovered, different forms and modifications were developed to change how quickly it acted, and beef and pork sources have been replaced with biotech “human” insulins grown in bacterial cultures. Many different types of non-insulin treatments for diabetes have also been developed – these act primarily by increasing insulin production from the pancreas or enhancing the action of insulin in the body.
Which brings us back to the A1C test. An International Expert Committee convened by the American Diabetes Association is now recommending that the A1C test be used to diagnose diabetes. This would replace (or supplement) the traditional fasting blood glucose diagnostic test, and the A1C test would still be used for twice-yearly monitoring of the adequacy of treatment for people with diabetes.
These developments in diagnosis and treatment have progressed in tandem – each building on knowledge gained from the other – with the A1C test being part of the continuing evolution of tests for diagnosing diabetes. For example, the fasting blood glucose level for diagnosing diabetes has changed over the years. It was originally set at 140 mg/dl in 1979, and then lowered to 126 mg/dl in 1997, when it was also decided that a level between 110 and 126 should be considered pre-diabetic, or “impaired fasting glucose.” And in 2003 the lower bound for “prediabetes” was lowered to 100.
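Since the thresholds themselves have moved, a small sketch makes the point vividly: the same fasting glucose value gets classified differently depending on which year’s criteria are applied. The function below is purely illustrative, encoding only the cutoffs quoted above and simplifying away the confirmatory retesting that actual diagnosis requires:

```python
def classify_fasting_glucose(mg_dl: float, criteria_year: int) -> str:
    """Classify a fasting plasma glucose value (mg/dl) under the
    diagnostic thresholds described above for a given year.
    Illustrative only; not clinical advice."""
    if criteria_year < 1997:
        # Original 1979 criteria: diabetes at 140 mg/dl and above.
        return "diabetes" if mg_dl >= 140 else "normal"
    # 1997 criteria lowered the diabetes cutoff to 126 mg/dl and defined
    # "impaired fasting glucose"; 2003 lowered its floor from 110 to 100.
    prediabetes_floor = 110 if criteria_year < 2003 else 100
    if mg_dl >= 126:
        return "diabetes"
    if mg_dl >= prediabetes_floor:
        return "impaired fasting glucose (prediabetes)"
    return "normal"

# The same patient is reclassified as the criteria change:
for year in (1990, 2000, 2005):
    print(year,
          classify_fasting_glucose(130, year), "|",
          classify_fasting_glucose(105, year))
```

A fasting glucose of 130 mg/dl was “normal” under the 1979 criteria and is “diabetes” after 1997; a value of 105 mg/dl was “normal” until 2003 and is “prediabetes” since.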
Why A1C Now?
While A1C testing has been used for years, there have been problems in standardizing the measurement. (This is discussed in the ADA paper linked to above.) But A1C measurement inconsistencies (which occur for all lab tests) have now been narrowed sufficiently that the ADA committee is recommending that an A1C level of 6.5% or higher be used to diagnose diabetes (for patients who are not pregnant and do not have hemoglobin abnormalities, since these can change HbA1C levels significantly), and that people with A1C levels between 6.0% and 6.5% be considered to have “subdiabetic hyperglycemia” because they have a significant risk of progressing to diabetes.
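In code form, the committee’s proposed decision rule is very simple. A minimal sketch of the cutoffs as described above (the function name is mine, and this is of course no substitute for clinical judgment or confirmatory testing):

```python
def classify_a1c(a1c_percent: float) -> str:
    """Apply the expert committee's proposed A1C cutoffs described above.
    Assumes the patient is not pregnant and has no hemoglobin abnormality,
    since either can significantly distort HbA1C results."""
    if a1c_percent >= 6.5:
        return "diabetes"
    if a1c_percent >= 6.0:
        return "subdiabetic hyperglycemia (elevated risk)"
    return "no diabetes indicated"

for value in (5.4, 6.2, 7.1):
    print(f"A1C {value}% -> {classify_a1c(value)}")
```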
So Back to Health Reform and CER – The Challenges Ahead
The challenge ahead is to make sure that we continue to utilize future discoveries in a timely and intelligent way – which finally brings us back to health reform and CER. Health reform that expands insurance coverage should dramatically improve the diagnosis and treatment of people with diabetes – which should also help control other healthcare and societal costs, because poorly controlled diabetes leads to many other costly problems. However, immediate cost pressures present barriers to using the best diagnostic and therapeutic interventions.
Comparative effectiveness research is supposed to provide information about the best interventions, but as has been seen with advancements in diabetes care, what is best often changes in progressive leaps based upon new discoveries. And one of the limitations of CER (and all research, for that matter) is that it takes time to do the work and analyze the results. Therefore, research really provides information about what was best when the research started – which could have been several years before the results are known and disseminated. And this time-lag effect can be even longer when the research is based upon previously published studies or analyses of clinical records.
The lesson here is that while CER and similar research can provide very important and useful information, it must be put into the proper historical and clinical contexts. What was state-of-the-art when the research protocols were developed may be 2, 3, 4 or more years out of date by the time the data are analyzed. This reality needs to be considered when such information is used for coverage and reimbursement decisions, and for decisions about health delivery and financing system redesign.
I am confident that most insurers are not paying for A1C tests to screen people for diabetes – and that it will likely take a year or more for even the most progressive insurers to do so…. but they eventually will. Which raises the questions: what did they gain by waiting? And what did they (and their patients) lose?
Addendum: The hospital lab my doctor uses charges $59 for an HbA1C test. So assuming that price doesn’t come down as more people get the test, the calculation that needs to be made is: what is the ROI for using HbA1C as a screening test? And the CER questions are how to identify the people who would most likely benefit from HbA1C screening, and how to determine how frequently the screening should be done.
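As a starting point for that ROI question, here is a back-of-envelope sketch using the $59 price quoted above. The yield and savings figures are placeholders invented for illustration, not researched estimates; pinning down the real values is exactly what the CER questions above are about:

```python
# Back-of-envelope screening arithmetic. Only the $59 test price comes from
# the post; the yield and savings numbers are hypothetical placeholders.
TEST_COST = 59.00           # $ per HbA1C test (hospital lab price quoted above)
YIELD = 0.03                # hypothetical: share of screened adults newly diagnosed
SAVINGS_PER_CASE = 2000.00  # hypothetical: $ saved per early diagnosis

screened = 1_000_000
cases_found = screened * YIELD
total_cost = screened * TEST_COST
net = cases_found * SAVINGS_PER_CASE - total_cost

print(f"Screening cost:  ${total_cost:,.0f}")
print(f"New cases found: {cases_found:,.0f}")
print(f"Cost per case:   ${total_cost / cases_found:,.0f}")
print(f"Net of savings:  ${net:,.0f}")
```

Even this toy version shows why targeting matters: the cost per case found falls directly as the yield rises, which is the argument for identifying who is most likely to benefit before screening broadly.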