September 2004 Issue | Robert P. Heaney, MD John A. Creighton University Professor

Welcome to Functional Medicine Update for September 2004. We have a wonderful session in store in this issue—an interview with Dr. Robert Heaney, our Clinician/Researcher of the month. I am sure many of you have heard of his work. He has been a founding father and researcher in the area of osteoporosis/bone mineral metabolism for five decades. I think you will find the insight and contributions he makes in his interview fascinating. As well, it is very usable information from a clinical perspective.

Diclofenac Devastates South Asian Vulture Population
I would like to begin this month by talking about an interesting observation that is quite far afield from the clinical medicine and nutrition topics traditionally addressed in FMU—the side effects of the increasing use of nonsteroidal anti-inflammatory drugs (NSAIDs) in veterinary medicine. This is one of those "aha" experiences, because we often do not understand the global implications of what we do. We think in very myopic terms about the localized effects of various decisions in the health sciences and do not consider the broader contextual framework. This topic will broaden our perspective and open our minds to some of the ecological and biosphere influences that the decisions we make in medicine might have.

This information comes from a report three years ago on a mysterious ailment that was causing massive die-offs of three species of vultures, birds of prey, on the Asian subcontinent.[1] At that time, the populations of two vulture species had dropped by 90 percent in India alone, in a period of less than 10 years. Large numbers of these stricken birds were seen with their heads and necks drooping, later to drop from their perches and die, apparently of kidney failure. Since then, the situation has gotten worse. According to Peregrine Fund researchers, the Oriental White-backed Vulture, the Long-billed Vulture, and the Slender-billed Vulture have declined by 92 to 99 percent over the last decade. Less than a decade ago, these vultures numbered in the tens of thousands across India, Pakistan, and Nepal. The crash in their populations is a human as well as an environmental disaster, according to research accomplished over the past year.

What role do these large birds play in the ecosystem? They are a free waste disposal system, quickly devouring dead cattle and other animals in the countryside, as well as in cities. For instance, a flock of vultures can reduce a full-grown cow to a pile of bones in an hour. They play an important role in managing waste in an agrarian society.

What is the cause of the rapid decline in the population of these birds of prey? The countries involved initially thought it was some kind of infectious disease, and they looked at specific types of viruses and bacteria that might induce disease. This year, the mystery seems to have been solved, and it is not a consequence of disease, but rather of chemical poisoning. It has to do with the recent use of the NSAID diclofenac, known in human medicine by the trade name Voltaren. It is now being used in veterinary practice on large animals, particularly cows, for the treatment of inflammation, fever, and lameness, because it is a very powerful anti-inflammatory medication. Concern about the three declining species led a variety of groups to investigate the cause of the decline through autopsy and tissue-sample research.

In the journal Nature in 2004, a fairly wide body of research was published that identified the veterinary use of diclofenac as the agent responsible for the devastating declines in the South Asian vulture populations.[2] This was a three-year study by The Peregrine Fund and the Ornithological Society of Pakistan, which found that 85 percent of 259 vultures examined had died of visceral gout, a condition caused by renal failure. Having eliminated the classic causes of renal failure, which include viral or bacterial infectious disease, pesticides, poisons, heavy metals, and nutritional deficiency, the investigators tested the theory that vultures were encountering a toxin while feeding on livestock carcasses (their main food source). They found that diclofenac had come into use as a veterinary NSAID in India only in the last few years, that most of the animals found to be lame were being treated with it, and that this use was increasing tissue levels of the drug in the animal carcasses.

Diclofenac is known to be toxic to kidneys in mammals, but vultures appear to be extraordinarily sensitive to it, making them, in effect, a sentinel or biomarker organism. Further investigation showed that diclofenac was fatal to vultures at 10 percent of the recommended mammalian dose, and that tissue residues in livestock treated at the labeled dose rate were sufficient to cause gout and death in vultures. This, coupled with the high incidence of visceral gout in wild vultures found dead in Pakistan, India, and Nepal, confirms that diclofenac is the primary cause of the Asian vulture decline.

This is interesting because diclofenac is widely used in human medicine, but was only introduced to the veterinary market in the Indian subcontinent in the early 1990s. In that period of time, we have seen the rapid decline in the vulture population. The drug is very inexpensive—less than $1 for a course of treatment in an animal—and it is widely used in the treatment of inflammation, pain, and fever in livestock.

This is a very interesting bit of second-tier information about how certain pharmaceutical agents, when they are introduced into the environment, may have broader implications than we think. That poses the question: if the vulture is a very sensitive organism, what about other living things that are less sensitive but habitually exposed? Could this have other effects we have not yet looked at? Could this be the canary in the coal mine, so to speak, as it relates to this issue?

We often think that when medications travel through our chain of use, they somehow end up being detoxified and eliminated, and that their residues are never revisited. Now, there is more and more evidence to indicate that, starting with aquatic biota and going all the way up to, in this case, higher vertebrates, these medications may commonly be found in the biosphere, where they can have deleterious effects.

After talking about NSAIDs, I would like to move to human medicine and discuss the evolving story of the benefit-to-risk ratio of selective cyclooxygenase-2 (COX-2) inhibitors and nonspecific NSAIDs in the management of chronic pain. These are some of the most commonly employed medications, ranging from over-the-counter ibuprofen to higher-potency prescription derivatives and the new selective COX-2 inhibitors.

People have been looking with greater intensity at the roles of COX-1 and COX-2 in various tissue functions. These two enzymes control the conversion of arachidonic acid into various 2-series prostanoids, and their relative activities differ by tissue. One cannot assume that the conversion of arachidonic acid into proinflammatory prostaglandins like PGE2 occurs at the same rate in all tissues; in fact, a platelet, a GI mucosal cell, a coronary artery cell, and a myocyte differ in how they respond to selective COX-2 inhibitors.

COX-1 and COX-2 have different patterns of activity: COX-2 is inducible, and COX-1 is constitutive. We call COX-1 a housekeeping enzyme, but COX-2 can play an important role as well; in the endothelium, for instance, it appears to have a partly housekeeping function, and excessive suppression of its activity may have deleterious effects because its function is necessary for the health of the endothelium. In the endothelium, COX-2 converts arachidonic acid into prostacyclin. Prostacyclin, when secreted by the endothelium, is an anti-platelet-adhesion agent that balances thromboxane A2, which the platelet produces from arachidonic acid through COX-1. Thromboxane A2 promotes platelet aggregation; prostacyclin produced by the vascular endothelium opposes it; and the balance of the two gives rise to proper clotting control. If endothelial prostacyclin production is reduced by excessive COX-2 blockade, there is potential for shifting the equilibrium between clotting and non-clotting in an untoward way. That leads to a risk of thrombosis, because thromboxane production then exceeds that of prostacyclin. That has been the emerging model of some of the potential risks of excessively suppressing COX-2.

That leads us to a recent article in the Journal of the American Medical Association titled “A Polymorphism in the Cyclooxygenase 2 Gene as an Inherited Protective Factor Against Myocardial Infarction and Stroke.”[3] This is an interesting paper that picks up on the theme of endothelial COX-2 activity. In this study, investigators looked at myocardial infarction (MI) and ischemic stroke, thought to be caused by matrix degradation by metalloproteinases, leading to rupture of the atherosclerotic plaque, the so-called unstable plaque. Production of macrophage metalloproteinase is induced by prostaglandin E2, which is a product of COX-2 activity. The authors investigated the relationship between COX-2 polymorphisms and the risk of MI and stroke, specifically whether the -765G→C polymorphism of the COX-2 gene had any relationship to clinically evident plaque rupture. The study took place between 2002 and 2003 with 864 patients having a first MI or atherothrombotic ischemic stroke, and 864 hospitalized control patients matched for age, sex, body mass index, smoking, hypertension, hypercholesterolemia, and diabetes. The -765G→C variant of the COX-2 gene was genotyped. The study also monitored markers such as COX-2, MMP-2, and MMP-9 expression and activity in plaques and peripheral monocytes; urinary 6-keto-PGF1α (a marker of endothelial prostacyclin); and endothelium-dependent and -independent forearm blood flow vasodilation, which would mean looking at some of the nitric oxide/endothelial interrelationships. What did they find?

They found that the prevalence of this COX-2 polymorphism was 2.41 times higher among controls than among cases; that is, individuals carrying this COX-2 genotype apparently had a decreased risk of MI and stroke. Although we are looking at a genetic variant, a polymorphism, the implication is that some individuals may have higher sensitivity and higher risk than others.

However, we should not assume that because a person has elevated COX-2 activity, we want to suppress it nonspecifically throughout the whole of the body. This is what often happens with nonspecific NSAIDs. If a person has difficulty with inflammation and pain, he or she may be taking a COX-inhibiting NSAID, only to experience some untoward secondary side effect because of COX-2 activity in another tissue.

Many further questions can be raised about the increasing prevalence of NSAID use, and even about the selective COX-2 inhibitors. Granted, there are differences among the selective COX-2 inhibitors in their influence on the endothelium. I believe this is still an evolving story and that we should not yet jump to any conclusions.

Let me move from that to the evolving story of cardiovascular risk associated with metabolic syndrome (syndrome X), which is characterized by glucose intolerance, insulin resistance, and hyperinsulinemia. Metabolic syndrome is another extraordinarily interesting, evolving story that Dr. Gerald Reaven brought to our attention some 15 or 20 years ago, but it has picked up steam and yielded greater understanding over the past five years.

In looking at glucose metabolism and its relationship to coronary heart disease (CHD), we find that the glucose tolerance test (used as the sine qua non for determining difficulties in glucose removal and transport) may have limited specificity for identifying individuals who have dysinsulinism and increased risk of CHD. Why do I say that? This is discussed in a recent paper in the Journal of the American Medical Association titled “Glucose Metabolism and Coronary Heart Disease in Patients With Normal Glucose Tolerance.”[4] Several prospective studies have shown a significant correlation between glucose metabolism and atherosclerosis in patients without diabetes, but differences in parameters of glucose metabolism among the various degrees of coronary artery disease (CAD) have not been specifically evaluated. In this paper, investigators were trying to find a marker that could clinically identify those individuals who had normal blood sugar levels, but who were at increased risk of CHD as a consequence of dysinsulinism.

A cross-sectional study was conducted in 234 men, mean age about 56 years, with normal glucose tolerance and suspected CHD, who were admitted to a medical center for coronary angiography from January 1 through June 30, 2001. Glucose and metabolic factors were determined, as was the extent of atherosclerosis by coronary angiography. The blood chemistry factors used to evaluate glucose metabolism included fasting and postload (postprandial) glucose and insulin (the glucose tolerance and insulin tolerance tests), glycosylated hemoglobin (HbA1c), and lipids, as well as insulin resistance measured by the homeostasis model assessment (HOMA) method. Patients were divided into four groups based on coronary angiography: no significant stenosis, 1-vessel disease, 2-vessel disease, and 3-vessel disease. Simple correlation analysis showed that the factors correlated with the extent of atherosclerosis were levels of postload glucose, postload insulin, and fasting insulin, as well as HOMA. Multiple stepwise regression analysis suggested that the factors independently associated with the number of stenosed coronary arteries were the same: postload plasma glucose, postload insulin, and fasting insulin, as well as HOMA.
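For clinicians who want to reproduce the HOMA estimate mentioned above, the standard calculation multiplies fasting glucose by fasting insulin and divides by a normalizing constant. A minimal sketch in Python; the patient values in the example are purely illustrative, not taken from the study:

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uu_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance (HOMA-IR).

    HOMA-IR = (glucose [mg/dL] * insulin [uU/mL]) / 405
    which is equivalent to (glucose [mmol/L] * insulin [uU/mL]) / 22.5.
    """
    return (fasting_glucose_mg_dl * fasting_insulin_uu_ml) / 405.0

# Hypothetical example: fasting glucose 95 mg/dL, fasting insulin 12 uU/mL
print(round(homa_ir(95, 12), 2))  # 2.81
```

Higher HOMA-IR values indicate greater insulin resistance; glucose reported in mmol/L simply uses the 22.5 divisor instead.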

For patients with normal glucose tolerance and different extents of atherosclerotic disease, postload glycemia and HbA1c levels were not equally distributed, but were significantly higher in those with more severe disease. This suggests that the glycemic milieu correlates with cardiovascular disease risk according to a linear model, with increasing dysinsulinism associated with increasing cardiovascular disease risk. The effect was graded: there was not only a graded effect of insulin and glucose on diabetes risk, but also a gradient of risk correlating insulin and glucose with CHD and stroke risk.

These are very important variables when we are clinically evaluating patients, looking at their fasting blood sugar or even their two-hour postprandial blood glucose after an oral glucose load. We may be better off looking at postprandial insulin levels, as well as HbA1c. Recall that we used to feel that HbA1c was only useful for monitoring diabetics to follow their level of control, and that anything above 8 or 9 percent of total hemoglobin as A1c would be clinically concerning. Now, we start to see a gradient effect of HbA1c, starting at about 5.5 percent and extending up into the abnormal range (above 9 percent), with increasing relative risk of CAD and of glucose dysfunction and insulin-signaling problems. The ways we have historically assessed glucose tolerance have been limited in their specificity and sensitivity for determining the risks of CAD that arise from altered insulin signaling.

What does one do from a clinical perspective, once these conditions have been identified? Consider a patient in the early stages: a modest elevation of HbA1c, say in the 6.5 percent range, and some modest postprandial elevation of insulin after a 50-gram glucose load. That patient is at risk. Suppose he or she also has an elevated fasting triglyceride-to-HDL ratio of, say, 5.5 to 6, another indicator of metabolic syndrome. And let us say the patient has some degree of visceral adiposity: an elevated waist-to-hip ratio, or perhaps an increased waist measurement, which is another indicator of insulin resistance and hyperinsulinemia. Now, what do we do? The patient is probably not yet a candidate for medication, so we might first try diet and lifestyle intervention. One of the principal ways of modifying that individual's diet is a low-glycemic-index diet, to lower the overall glycemic response of the diet and time-release carbohydrate and glucose to the bloodstream, thereby better regulating the release of insulin and its stimulation of peripheral cellular glucose uptake.
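The triglyceride-to-HDL ratio in that scenario is simple arithmetic, but it is easy to build into routine chart review. A minimal sketch; the lipid values here are hypothetical, chosen only to land in the elevated range discussed above, and any cut-off used in practice is the clinician's judgment, not a value from this article:

```python
def tg_hdl_ratio(triglycerides_mg_dl: float, hdl_mg_dl: float) -> float:
    """Fasting triglyceride-to-HDL-cholesterol ratio (both in mg/dL)."""
    return triglycerides_mg_dl / hdl_mg_dl

# Hypothetical patient: fasting triglycerides 220 mg/dL, HDL 40 mg/dL
ratio = tg_hdl_ratio(220, 40)
print(round(ratio, 1))  # 5.5
```

A ratio in the 5.5 to 6 range, as in the text's example, would flag the patient for the diet and lifestyle work-up described above.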

The glycemic index remains a very promising indicator. We were fortunate at the 11th International Symposium on Functional Medicine to have one of the co-discoverers of the glycemic index, Dr. David Jenkins, from the University of Toronto School of Medicine. Dr. Jenkins provided an eloquent presentation on the historical development and application of the glycemic index and its relationship to the high-complex-carbohydrate/high-fiber diet for improving insulin sensitivity and glucose transport.

In some quarters, the glycemic index and the glycemic load are still quite controversial. Controversy regarding application of the glycemic index in the management of diabetes was recently re-ignited with the publication of a positive meta-analysis on low glycemic index interventions in diabetics in the journal, Diabetes Care, by Brand-Miller and co-workers, and the negative editorial it received from Franz, the past co-chair of the American Diabetes Association working group on nutrition recommendations.[5]

The controversy continues regarding the implications and integration of glycemic index and glycemic load concept into clinical nutrition and medicine. It has been heavily debated since 1981 when Dr. Jenkins and his co-workers first discussed it. The debate centers on the importance of carbohydrate quality versus quantity in medical nutrition therapy. Often, we go astray by talking just generically about carbohydrate, protein, and fat, rather than talking about each type. Carbohydrate can either be high glycemic index or low glycemic index, depending upon its composition, form, and physical characteristics. If it is a component of highly unrefined roughage, including grains, it is generally low glycemic index. If it is very highly purified starch, it can be higher glycemic index. For instance, a potato, once mashed or fried, becomes a source of higher glycemic index than a baked potato. Rather than just talking about carbohydrate, we need to talk about the glycemic index of specific foods, and the total glycemic load that carbohydrate contributes to the daily diet.
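The distinction drawn here between a food's glycemic index and the diet's total glycemic load can be made concrete: glycemic load is conventionally calculated as glycemic index times available carbohydrate in grams, divided by 100, summed over the foods in a meal or a day. A minimal sketch in Python; the GI and carbohydrate figures in the example are rough illustrations, not authoritative table values:

```python
def glycemic_load(glycemic_index: float, carb_grams: float) -> float:
    """Glycemic load of one serving: GI x available carbohydrate (g) / 100."""
    return glycemic_index * carb_grams / 100.0

# Illustrative meal; GI values are approximate, for demonstration only
meal = [
    ("mashed potato", 85, 30),  # (food, GI, available carbohydrate in grams)
    ("lentils", 30, 25),
    ("apple", 38, 15),
]
total_load = sum(glycemic_load(gi, grams) for _, gi, grams in meal)
print(round(total_load, 1))  # 38.7 for this example meal
```

Swapping the high-GI starch for a low-GI one in this example would drop the meal's total load substantially, which is the clinical point of prescribing by glycemic load rather than by grams of carbohydrate alone.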

In a recent issue of the Journal of the American College of Nutrition, there was an interesting commentary on the clinical application of glycemic index.[6] This commentary examined various recommendations from different groups on the management of diabetes, glucose intolerance, and hyperinsulinemia, with regard to carbohydrate, fat, protein, and sugar intake. In none of the recommendations did they talk about glycemic index. The American Diabetes Association, for instance, has no recommendations about the glycemic index. Some other groups, such as the Canadian Diabetes Association, do recommend a lower glycemic index diet, although they do not specify how that is to be achieved. The European Association for the Study of Diabetes has a fairly strong position on the low glycemic index component, and they define it more rigorously as it relates to its inclusion in the dietary management of diabetics.

What are the arguments for and against the inclusion of the glycemic index? It seems to be a cut-and-dried thing to most of us, but controversies like this can continue for decades. What are the arguments for it? Glycemic index is a robust measurement. There is some variability between a food's stand-alone glycemic index and its glycemic contribution to the overall diet; that is why we often talk about the glycemic load of the total diet, not just the glycemic index of a food. Glycemic index is a physiological measurement, and it therefore correlates with the area under the blood sugar curve after eating. The glycemic index of single foods has been shown to apply to mixed meals: pooled glycemic index values of single foods are strongly correlated with the relative glycemic responses to mixed meals and can accurately predict the effects of mixed diets on glycemic control. Glycemic index is an easy concept to use and can be employed by people who are not nutrition professionals. It has clinical utility because it correlates with HbA1c, fructosamine, fasting blood sugar, and fasting and postprandial insulin levels. Those are all arguments for the glycemic index.

What are the arguments against the glycemic index? People might say there is too much variability in the glycemic index. They might say that its calculation ignores glucose values below the fasting baseline, and that it is based on measurement of postprandial glucose over only two to three hours. They might say there are interactions among carbohydrate and other food factors, such as protein, fat, fiber, and the form, processing, and preparation of the food, that complicate accurate prediction of the glycemic response when the food is consumed as part of a total meal. Last, critics feel there have not been enough randomized, controlled clinical data to support definitive recommendations about how to apply the glycemic index of a food, or appropriate cut-offs for what counts as high and what counts as low.

My position, in light of all the literature, is in support of the glycemic load concept as being a good clinical indicator for doing diet evaluation and ultimate diet prescriptions for individuals with dysglycemia and dysinsulinism, and increasing relative risk to CAD and diabetes. From many papers published over the past few years, it is now recognized that this may be the best clinical tool to apply to the construction of diets that will lower glycemic load and for the reduction of HbA1C, reduction of postprandial insulin, and improved insulin signaling.

If we look at how carbohydrate connects to other factors in the diet, protein and fat for instance, we find that monounsaturates like oleic acid and the polyunsaturated oil component of the diet are much more favorable than long-chain saturated fatty acids. We talk about the effects of vegetable proteins, particularly soy protein, on improvement of insulin sensitivity and insulin signaling. There are obvious differences among types of protein in how they influence insulin, and we should not say that protein is non-insulinogenic. A number of papers published over the past few years clearly identify that certain dietary protein sources, including even beef protein at higher levels, will increase plasma glucose and plasma insulin levels. Therefore, we cannot say that carbohydrate is uniquely glucogenic and protein is not. Again, it depends on the relative amount and type of protein, and on the physiology of the individual.

We can say generally, however, that soy protein has a salutary, or beneficial, effect on stabilizing insulin. It is possible that the isoflavones present in soy are participants in the signaling process through the effect they have on gene expression and protein tyrosine kinase activities, which may be interrelated with insulin signaling and the phosphatidylinositide pathway that modulates glucose transport. A recent paper published in the Journal of the American College of Nutrition demonstrated that dietary intake of soy protein and isoflavones was associated with lowered CVD risk factors in high-risk, middle-aged men in Scotland.[7] When glycohemoglobin levels were examined for the men on the soy protein-enriched diets, they had lower glucose versus the baseline dietary run-in. I think we ought to be examining the type, as well as the magnitude, of protein, the type and magnitude of fat, and the type and magnitude of carbohydrate when doing dietary prescriptions, to establish total glycemic load for individuals with dysinsulinism.

Mild Hyperhomocysteinemia Induced by Diets Rich in Methionine or Deficient in Folate Promotes Early Atherosclerotic Inflammatory Processes

Let us not forget about the hyperhomocysteinemia that may result from high-protein diets rich in the sulfur-containing amino acids. That is something often overlooked as another potential indicator of vascular risk. When one is eating a very high-protein diet, particularly an animal-protein diet rich in the sulfur-containing amino acids methionine and cysteine, one is activating the transsulfuration pathway, which routes metabolism through homocysteine and creates a need for proper homocysteine control. If one has certain polymorphisms, or nutrient insufficiencies of B6, B12, or folic acid, the result may be poor metabolism of homocysteine, and increased homocysteine is associated with increased vascular risk.

In animal studies, even mild hyperhomocysteinemia, induced by feeding diets rich in methionine or concomitantly deficient in folate, has been found to promote early atherosclerotic inflammatory processes. I am citing a recent paper published in the Journal of Nutrition.[8] That is obviously another variable we need to take into account when designing dietary prescriptions. Often, people are advised to increase their protein and decrease their carbohydrate; in fact, they may be told that carbohydrate is bad, to eliminate it, and to fill in the gap with more protein. If a person is consuming more protein than the homocysteine cycle can manage, homocysteine levels can go up and lead to increased risk of heart disease by a non-lipid mechanism: a homocysteine mechanism.

I want to make sure we are all on the same page. We have a lipid hypothesis that relates to coronary atherosclerosis. We have a homocysteine hypothesis that relates to coronary atherosclerosis and its association with high-protein diets and low folic acid status. And we have a carbohydrate association, which we have discussed with regard to hyperinsulinemia and the difficulties it promotes in endothelial function: ADMA increases, alteration in endothelial nitric oxide production, and coronary injury through oxidative stress mechanisms. All of those are at play, and taking the simple-minded approach of telling people to increase protein and cut out carbohydrate ignores the range of important information developed in research and clinical fields over the past decade. Again, it is a matter of balance: moderation, variety, color in food, minimally processed food, proper nutrient density with regard to vitamins and minerals, and not swinging the pendulum so far as to become a carnivore without even looking at the source of the protein, which may have secondary effects through high saturated fat intake.

Dietary Omega 3 Fatty Acids

We are talking about fats, and I mentioned the monounsaturated oleic acid component. There is also the omega 3 polyunsaturated fatty acid component. There are many benefits of omega 3 fatty acids with regard to vascular function beyond those originally described in the literature. The first reports in the 1980s outlined the paradoxical observation that supplemental eicosapentaenoic acid (EPA), or fish oil supplements, lowered blood triglycerides. How does giving a triglyceride, a fat, lower a fat in the blood? It seems paradoxical, but over the ensuing 20 years, considerable research has identified the mechanism by which this can occur. It is now recognized that the omega 3 fatty acids have a remarkable nutrigenomic effect, modulating gene response elements associated with fatty acid metabolism, insulin sensitivity, and glucose transport. These effects occur principally through the peroxisome proliferator-activated receptor (PPAR) family, the omega 3 fatty acids acting as agonists of PPARγ activity. We have begun to understand that these fats have cell-signaling capabilities. When we eat more omega 3 fatty acids, we are delivering dietary signals that modify gene expression patterns and cell physiological function.

Some of the other things now being observed for omega 3 fatty acids have to do with the stability of plaque. A study published in the Lancet two years ago, which we reviewed in a previous issue of FMU, indicated that in individuals who supplemented their diets with omega 3 fatty acids, existing plaque tended to be stable rather than unstable; plaque instability is associated with diets high in saturated fat. That is another benefit.

A more recent report appeared in the Lancet titled “Immediate effects of n-3 fatty acid infusion on the induction of sustained ventricular tachycardia.”[9] This is an interesting paper. Prior work had shown that increased consumption of omega 3 fatty acids reduced mortality from sudden coronary death, indicating that these fatty acids have antiarrhythmic effects. In this study, researchers performed electrophysiological testing in 10 patients with implanted cardioverter defibrillators who were at high risk of sudden cardiac death. To assess their immediate effects on the induction of sustained ventricular tachycardia, n-3 fatty acids were infused; after infusion, such tachycardia could not be induced in five of seven patients. The findings show that infusion of n-3 polyunsaturated fatty acids does not itself induce arrhythmia, and resulted in a reduction of inducible sustained ventricular tachycardia in some patients. That would suggest that n-3 fatty acids have some effect on cell pacing in the cardiocyte. In the editorial that follows this paper, the authors state:

“Observational and trial data have accumulated to support the hypothesis that increased consumption of the long-chain n-3 polyunsaturated fatty acids found in fish, especially EPA and DPA acids, lower the risk of dying from coronary heart disease, and interest has focused on the antiarrhythmic properties of these fatty acids. In the late 1980s, McClennan et al. were the first to show anti-arrhythmic properties associated with these fatty acids in animal models. Billman et al. confirmed and expanded on these experiments in a dog model. Further experiments reported plausible cellular mechanisms for the anti-arrhythmic effects, including modulating of sodium, potassium, and calcium channels. N-3 fatty acids might also have favorable actions on heart rate variability, and therefore could be exerting anti-arrhythmic actions through effects on the autonomic nervous system.”[10]

In the DART Study, the Diet and Reinfarction Trial, published by Burr et al., just over 2000 men with a history of myocardial infarction (MI) were randomized to three dietary strategies (lowering saturated fat, increasing fiber, and increasing fatty-fish intake). There was a 29 percent reduction in total mortality in the participants who received advice to eat at least two portions of fatty fish a week, but no difference in total events for coronary heart disease because more non-fatal MIs occurred in the fish-advice group. Burr et al. suggested that fish consumption might reduce the risk of fatal arrhythmias, and therefore preferentially affect mortality after myocardial infarction.

“What are the implications of these findings? As has been shown with traditional anti-arrhythmic drugs, suppression of ventricular tachycardia during electrophysiological testing does not directly translate into a survival benefit when the same drugs are administered chronically. Therefore, the implications of these data on their own are limited.”10

However, this study, in conjunction with previous experimental data, provides a possible mechanism to explain the preferential benefit of dietary n-3 fatty acid intake on sudden cardiac death. Currently, three randomized trials are examining the effect of fish oil supplements on recurrent episodes of ventricular tachycardia and/or fibrillation in patients with implantable cardioverter defibrillators. The results of these trials will help us understand more about the anti-arrhythmic properties of n-3 fatty acids.

Differential EPA Elevations and Altered Cardiovascular Disease Risk Factor Responses After Supplementation With DHA in Postmenopausal Women Receiving and Not Receiving Hormone Replacement Therapy

DHA has triacylglycerol-lowering potential, but it appears to be less important than EPA in cardiac rhythm function; EPA is the fatty acid more strongly associated with reduced cardiovascular disease risk and sudden coronary death. This was recently studied in postmenopausal women receiving and not receiving hormone replacement therapy (HRT).[11] Investigators examined the effects of supplementation with DHA (free of EPA) on the resulting elevation in EPA and on selected cardiovascular disease risk factors. In all women, DHA supplementation was associated with significant changes, but women receiving HRT showed a 45 percent lower net increase in EPA, and a 42 percent lower estimated percentage retroconversion of DHA to EPA, than women not receiving HRT. It may therefore be desirable to give combinations of EPA and DHA in a highly purified state, to get the benefits of both of these long-chain omega 3 fatty acids. The doses we are talking about are in the range of 3 to 6 grams per day of the EPA/DHA combination, with EPA appearing to have more of an antiarrhythmic effect.

The story continues, and we have a lot more to learn about the omega 3 fatty acids. As we move toward low-glycemic diets and toward cardiovascular risk reduction, we need to make sure we are getting adequate levels of pure EPA/DHA through either dietary cold-water fish or dietary supplements.

It is time for our Clinician/Researcher of the Month interview.


Robert P. Heaney, MD
John A. Creighton University Professor
Creighton University
Omaha, NE 68178

JB: It’s time for our Clinician/Researcher of the Month. We are privileged this month to interview a professional whose work I have admired for the 30 years I have been in this field: Dr. Robert Heaney from Creighton University in Omaha, Nebraska. Dr. Heaney was Chairman of the Department of Medicine at Creighton University, as well as Head of the Section of Endocrinology & Metabolism, and Vice President for Health Sciences. He is currently the John A. Creighton University Professor, a position he has held since 1984. His publication record is extraordinary. He has covered an entire horizon in his research, and has been one of the premier leaders in the area of bone mineral metabolism, helping us understand the complicated interrelationship of diet, lifestyle, vitamin nutriture, and osteoporosis and bone status.

It is with great privilege that we welcome you to FMU, Dr. Heaney. Because your work has spanned so many years and so many different areas, I would like to ask what got you started down your path as a medical doctor in the area of clinical nutrition?

RH: I started studying the problem of osteoporosis. As I looked at the cases we had, I saw that I was studying the barn after the horse had gone. It dawned on me that we needed to look at what was going on in the physiology of women before they became osteoporotic, rather than after they were already in that condition. I cast my net broadly. I was fortunate enough to get long-running support from the National Institutes of Health (NIH), and looked at all kinds of things that were happening to women in mid-life, starting before menopause, and following them longitudinally for what has turned out to be 30+ years. What I saw was that calcium intake made a difference. I hadn’t originally believed that, but my own data convinced me that if you had a high calcium intake, you were more likely to be in bone balance, not losing bone. If you had a low calcium intake, you were more likely to be losing bone, and that you could change one into the other simply by improving people’s calcium intake.

I came into clinical nutrition through the back door. I started my medical life as an endocrinology researcher, looking at the issues surrounding bone metabolism and bone turnover, and why bone mass might go up or down. Most of the early bone research in medicine, from Fuller Albright on, was done by endocrinologists, so it was a natural entry point into the topic area. In the past few years, however, I have focused much more on clinical nutrition than on formal endocrinology.

JB: Going back to some of your earlier work in the late 1950s and early 1960s, did you have any idea at that point how this field would evolve? Did you have a sense of the vitamin D metabolism connection and the thyroid/parathyroid, hepatic hydroxylation, and all the things that would open up in this field?

Measurement of 25-Hydroxyvitamin D Levels as a Functional Status Indicator
RH: 25-hydroxyvitamin D hadn’t even been discovered then. That didn’t come on board until the late 1960s and early 1970s, if I recall correctly. Vitamin D was a great sleeper. Nobody knew anything about vitamin D. I had the privilege of serving on the Calcium and Related Nutrients Panel of the Food & Nutrition Board that released the DRIs for calcium and related nutrients in 1997. When we did the chapter on vitamin D in 1997, we knew almost nothing more than we had known in 1936. The one thing we could be certain of was that blood 25-hydroxyvitamin D, at that point, was the functional status indicator. This was how you could tell whether a patient had enough vitamin D, or not. We didn’t know in 1997, however, how much was enough. We just knew that if you had a higher blood 25-hydroxyvitamin D level, you would be more likely to be vitamin D replete, but we didn’t have a number that we could assign to where the cutoff level is between adequacy and deficiency.

There has been a tremendous amount of vitamin D work that has come on the scene in the last nine or ten years. For all practical purposes, we’ve learned more about vitamin D in the past few years than in all the rest of vitamin D’s history, going all the way back to the 19th century. We now have good reason to believe that at least three fourths, and maybe 90 percent, of the vitamin D our body uses every day has nothing to do with calcium metabolism whatsoever, or bone, for that matter. Vitamin D is involved in so many other tissues, so many other systems, and probably in many other diseases. We have simply been unaware of it because it took so long for the effects to become manifest that we couldn’t connect cause with effect.

Long- Versus Short-Latency Nutritional Deficiencies
JB: One of the more eloquent and important papers I have recently read is your article in the American Journal of Clinical Nutrition, which discusses long-latency versus short-latency deficiency disease.[12] It’s an interesting concept that many clinicians probably haven’t fully explored: some conditions show up soon as a consequence of insufficiency of a specific nutrient, while others may take a long period of time to appear clinically. Would you help us understand your concepts of long- versus short-latency deficiencies?

RH: It’s useful, I think, to remember back about 100 years. Obviously, we can’t do this on personal memory, but at least we have history, so we know what’s going on. About 100 years ago, nutritional science was born. At that time, the prevailing conception in medicine was that all disease was caused by external invaders, either germs or toxins of some sort. The idea that not eating something could make you sick was absolutely foreign to medicine and to medical science. The early work with nutritional science really had to swim upstream, because it didn’t make sense to anybody that not eating something could make you sick. Eating something could make you sick if it poisoned you somehow, but not eating something? How could that be? The whole idea of essential nutrients didn’t exist 100 years ago.

Some of the earliest work was done by Christiaan Eijkman, at that time a young physician in the Dutch East Indies, confronting the problem of beri-beri in plantation workers. He noted that when he fed chickens the same polished rice that was fed to the workers, they developed a syndrome very much like human beri-beri. He also noted that feeding rice polishings to the chickens seemed to heal or prevent the problem, and when he fed the polishings to the workers, they got over the beri-beri, too. That was really the first proper nutritional experiment in which a food was associated with a disease, or the absence thereof. Yet the paradigm of the external invader was so powerful that when he published that work, he explained the finding not in nutritional terms, but by supposing that the rice must carry a microbe with it, and that the rice polishings contained a natural antidote to the microbe. If you fed only polished rice to people, you were feeding them a germ that made them sick; if you fed the rice polishings back with the polished rice, you fed the natural antidote and they didn’t get sick anymore. Even that first nutritional experiment was misinterpreted, in light of the prevailing paradigm of disease caused by external invaders.

Because of E.V. McCollum and a number of other workers, particularly in this country, it was soon understood that there were essential constituents of food that were vital for health and for life. By the first decades of the 20th century, we had begun to move into an era that saw the birth of nutrition. But the connection between cause (taking something out of the diet) and the development of illness, or symptoms, had to be short. We couldn’t have seen the connection otherwise. If it took 20 years to develop beri-beri, Eijkman never could have seen the effect, and we couldn’t have done it in humans. We didn’t have the conceptual models to work with that kind of a problem.

History of Vitamin D Research
Nutrition has been working for the last 100 years with the short-latency deficiency disease model. For vitamin D, the short-latency deficiency disease was rickets, or osteomalacia. The RDA for vitamin D is pegged to the amount you have to take to ensure that you’re not going to get rickets or osteomalacia. But because of the implicit and unspoken assumption that there was one disease per nutrient, what happened if you didn’t get quite enough vitamin D, but didn’t have rickets? We coined a funny term for that one. We called it insufficiency. It was as if we said to ourselves: well, you can’t be deficient because you don’t have rickets, but you obviously are not getting enough vitamin D to absorb all the calcium your body needs, so we’ll call it something else. We’ll call that insufficiency.

The literature of the last 10 or 15 years in the vitamin D field is filled with the term “insufficiency,” as contrasted with normalcy, which would be a blood 25-hydroxyvitamin D at a higher level. That’s the background to the short- and long-latency deficiency problem. The long-latency deficiency disease related to bone and calcium for vitamin D would be osteoporosis. I have to confess that when I was teaching this to medical students back in the 1960s and 1970s, I made a clear distinction. I said that vitamin D has to do with rickets; it has nothing to do with osteoporosis. I’m sure I graded some papers wrong that said the contrary, but it turns out I was the one who was wrong; the whole field was wrong.

The milder degrees of vitamin D deficiency actually produce osteoporosis; they don’t produce osteomalacia. Osteomalacia is a manifestation of the most severe degree of vitamin D deficiency, but before you get there, you go through a stage of osteoporosis; at least, most people do. The problem with the long-latency deficiency diseases is that there is not a single cause for them. There are a lot of ways to get osteoporosis. There are a lot of ways to get cardiovascular disease. There are a lot of ways to get brain degeneration. Nutrition figures into some of those ways, and the real challenge confronting nutritional science in its second century of existence is to figure out the right kinds of scientific approaches to unravel the role of nutrition in these chronic diseases. We run the risk of a pendulum swing, with too much enthusiasm and people thinking they can prevent or cure all chronic disease with megavitamin therapy. That’s clearly not right, and it leads to all kinds of mistakes and terrible problems. At the same time, we don’t want to err on the side of such extreme caution that we deprive people of some probably correct insights.

For instance, in our osteoporosis clinic here at Creighton, we give everybody 1000 IUs of vitamin D per day. We used to test them individually, but they were all low, and so we ended up supplementing everybody anyhow. We find that many of them may require 2000 IUs per day. That’s above the RDA figure for vitamin D, but we’ve been able to show that the RDA barely budges the blood 25-hydroxyvitamin D level. It’s just not enough to meet these patients’ needs. I’m not suggesting that all of our patients with osteoporosis are in that state because they haven’t had enough vitamin D. It’s just that we can’t treat them adequately if we don’t ensure that they have adequate vitamin D.

JB: This is obviously a fascinating paradigm, meaning it’s a fundamental thought-process shift. I know you have some thoughts about how one would go about experimentally answering some of these questions of the long-latency deficiencies, or insufficiencies. These are obviously methodologically more complicated, and often, in the development of any field (as David Deutsch discusses in his book, The Fabric of Reality), medicine is still pretty much in its early stages as a true science; it’s more of an observational science. How do we go about asking methodological questions about the longer-latency deficiency disorders?

RH: I’ll take a page from E.V. McCollum’s book. I mean “book” metaphorically; he didn’t actually write one. One hundred years ago, or less, when he was working as a young PhD scientist at the University of Wisconsin, he was concerned with the health of dairy cows, which is obviously a matter that Wisconsin would be interested in. He realized that dairy cows had a relatively long life span, and that he’d never find things out quickly enough if it took a while for them to develop, so he moved to laboratory rats. Laboratory rats didn’t exist back then; they had to be created, and he didn’t have access to any at the University of Wisconsin, so he set traps in barns and caught wild rats and started to work with them. But they were not sufficiently docile, and he ultimately found a pet supplier in Chicago who could give him some laboratory rats that were much easier to work with. And, of course, now we know that rats and mice have much shorter life spans. A surprising amount of what we know about nutrition has actually been learned in these laboratory animals, simply because you can compress so much of a lifespan within a reasonable period of the experimental life of the investigator. I’m not suggesting that is the full answer, but I think we have to think in those kinds of terms. How can we find models that play out their effects in a period of weeks or months, rather than years and decades? That’s step number one.

Individually, it’s going to depend upon the nutrients. For instance, in the paper that you referred to in the American Journal of Clinical Nutrition last fall, I speak specifically about calcium, phosphorus, and folic acid. But I could have chosen vitamin K; I could have chosen vitamin E. There are quite a number of other nutrients that have effects on multiple systems, where nutritional science has, nevertheless, tended to focus on a single one. What do we think of with vitamin K? We think of blood clotting. What do we think of with respect to folic acid? Well, the classical issue was megaloblastic anemia. Subsequently, we’ve learned that it’s very important for neural tube defects in early stages of embryonic development. But even then, nutritional science had trouble absorbing that. They figured that was a unique pharmacologic effect that didn’t really have anything to do with the regular role of folic acid in ordinary, everyday nutrition. We now know that’s wrong, but this is an example of how we had trouble assimilating this new information that didn’t fit in this one disease/one nutrient model system that had so trapped our thought processes.

JB: Let’s go back to an area in which you are a world expert, that being osteoporosis, and take a look at it as a disease, and as something that can be clinically defined. It’s related somehow to osteogenesis/osteolysis equilibrium such that one is sustaining a net loss of calcium and a net loss of protein in the framework of bone. What has been the changing thought about the dynamics that relate to that condition, given the long- versus short-latency concept? There is now data saying that loss of calcium from bone is not directly related to fracture rate, so there may be other mitigating factors that we need to pay very important attention to.

Excessive Bone Remodeling
RH: That’s exactly right. We now recognize that the vigorousness of the turnover, or remodeling, of bone may be a more important fragility factor than the amount of bone you have. Obviously, if you have very little bone, you’re going to have a flimsy skeleton, no matter what. Bone mass remains important, but a high rate of turnover also creates fragility in its own right. This is a very nice example of the fact that we have to reconceptualize this disease, in which decreased bone mass is actually a part of the name of the disorder itself. Osteoporosis, of course, means porous bones. The clear idea there was that you have moth-eaten, or inadequately mineralized, or incomplete bone. That was so intuitively obvious that it made very good sense and has had a century-long existence.

Now that we’re able to measure bone mass with some precision, we find that it does, in fact, predict fracture with some degree of accuracy. If you have low bone mass, you are much more likely to fracture than if you have high bone mass. But within any given mass category, some people are fracturing and some aren’t. We now recognize that the difference there is due to the fact that some have high remodeling rates, and others low remodeling rates. Probably the major reason the new drugs, the new bone-active agents work, is that they slow down excessive remodeling.

When you go on a bisphosphonate drug, such as alendronate or risedronate, for example, you begin to get a reduction in your fracture risk that begins essentially at day one of your taking the medication. And by three to six months, you’ve probably achieved the maximum benefit. Now, that doesn’t mean you shouldn’t keep taking the drug, because if you stop, then your fracture risk will go back up. But you haven’t changed bone mass very much in that short period of time, probably trivially, as a matter of fact. What’s really happened is that you’ve shifted the basic relationship between bone mass and fragility by cutting out the weak points where you’ve been constantly remodeling. Think about it as if you were remodeling the side of your house. You tend to do it a piece at a time. If you were to take the whole side out, the house might fall down, because you’d be taking out some essential structural supports. Well, too much remodeling in the skeleton is a source of great fragility for it. It’s important that we hold remodeling in check.

Treatment of Bone Loss
We now have reason to believe that the primitive human skeleton remodeling rate, under the conditions in which we evolved (high calcium intake, high vitamin D status, and a lot of exercise/physical activity), would have been one third or less of what we typically take for granted. We think it’s normal, but we’re probably all living on the precipice of osteoporotic fragility, simply because we all have such high remodeling rates. One of the challenges in terms of this long-latency deficiency disease is to try to figure out why the remodeling rate is so high, because that is relatively easy to fix. Bisphosphonates will reduce a postmenopausal woman’s remodeling rate down into a healthy premenopausal normal rate. We didn’t realize that was what it was doing. We invented the bisphosphonates to slow down bone loss, and they do that. But they do a lot more than that; we just didn’t know it at the outset.

JB: I’d like to propose a thought for your response. We are starting to see an increasing body of literature discussing remodeling rates as a manifestation, in part, of activation by signaling molecules like TNF-α and other inflammatory mediators, suggesting that perhaps part of this problem is inflammatory-mediated. Then we get to the 1,25-dihydroxyvitamin D question as an immune-modulating hormone. From your position of understanding, do you feel there is something about inflammation at the bone remodeling unit that participates in the high remodeling rate?

1,25-Dihydroxyvitamin D as an Immune Modulator
RH: Circumstantial evidence suggests that there is something there, but we don’t yet have the proof of concept that we’d like to have. It’s a hypothesis that’s worth testing. Yes, it needs to be looked at. With respect to 1,25-dihydroxyvitamin D as an immune modulator, the key point that we are just now beginning to understand, and that many people probably haven’t adequately grasped, is that the 1,25-dihydroxyvitamin D circulating in our blood is there as a result of a high level of parathyroid hormone secretion. That’s necessary for calcium absorption and for bone health. That’s the classical vitamin D function that we’ve known about for years, although we continue to work out the details. But it’s the basic vitamin D/calcium bone health function.

What’s important with respect to immune modulation, multiple sclerosis (MS), and so many other disorders is not the circulating level of 1,25-dihydroxyvitamin D, but the circulating level of 25-hydroxyvitamin D, the compound the Food & Nutrition Board recognizes as the functional status indicator for vitamin D. How this plays a role is the following. If there is a high blood level of 25-hydroxyvitamin D, then the immune cells, the cells of the central nervous system, the epithelium of the prostate, breast, and colon, and a whole host of other tissues that have vitamin D receptors are able to see and work with that high blood level of 25-hydroxyvitamin D, which they use internally to make their own 1,25-dihydroxyvitamin D. They don’t depend on what’s circulating in the serum, and they probably make a lot more than they could get out of the serum under any circumstances. That’s why I say that probably three fourths to 90 percent of the vitamin D we use every day is being used for other functions. But it’s being used through the mediation of 25-hydroxyvitamin D, which is the critical basis for all vitamin D function. That’s where vitamin D may be important, for instance, in the prevention of MS. There’s a very clear association between MS and serum 25-hydroxyvitamin D: if you have a high 25-hydroxyvitamin D level, you have less MS, and vice versa. The same thing is true of prostate cancer.

JB: Once again, that certainly is a beautiful example of what you’re discussing: it isn’t just a one nutrient/one disease connection. These are very important biomolecules that have pleiotropic effects, I guess we would say.

I’d like to go back to a couple of simple questions that come out of your extensive research. A lot of clinicians are interested in the work as it pertains to nutrition and the prevention of osteoporosis. Can we get into the amount of calcium, the form of calcium, the amount of vitamin D, and the form of vitamin D? There is a whole range of different calcium sources, with the highest percentage calcium being that of calcium carbonate (though it’s fairly insoluble), up to more soluble forms like calcium citrate. I know you’ve researched the different forms of calcium. Would you give us some insight as to the calcium/vitamin D composition and amount question?

Foods as the Primary Source of Calcium
RH: I’ve looked fairly extensively at various calcium sources. I’ve not done it exhaustively, simply because there isn’t any way to get financial support for a really exhaustive survey of the field. And I do think something of that sort needs to be done, because the FDA regulates calcium supplements as foods, and there aren’t any efficacy standards for foods. How do you tell how efficacious broccoli would be, for instance?

The consumer naturally assumes that any two calcium sources are going to be the same, but I can tell you they are not. Solubility, it turns out, doesn’t make all that much difference. Calcium is typically absorbed from a neutral, or slightly alkaline, medium in the small intestine anyway, so acid solubility is really not particularly important, and we’ve shown this: calcium sources whose solubilities vary over seven orders of magnitude (some 10 million times more soluble than others) are not absorbed any better one than another. The intestine is smart enough to do something we haven’t yet figured out how to do in a chemistry laboratory. Absorption is fine from calcium carbonate. Calcium phosphate has nearly the same calcium content as calcium carbonate; that’s also a good source. But really, the best source is food. I have to stress this over and over again. Dairy calcium, for instance, is much preferable to supplements. I realize that not everybody is going to get all the calcium they need from food sources, but they really have to start there.

Why do I say that? It’s not just a question of bioavailability; it’s that we need more nutrients than just calcium. We’ve shown, in three different cohorts of women, that people who have low calcium intakes (less than two thirds of the recommended amount, which is a kind of working definition of “low”) tend to be low, by the same definition, in at least four other nutrients on top of calcium. If you take nine key nutrients in an ordinary diet, people who are low in calcium tend to be low in more than half of all the key nutrients. It’s hard to fix that with a pill. We really have to stress proper diet counseling with our patients, or get them connected with a nutrition professional who can knowledgeably help them find ways that work in their lives to do a better job of getting the food sources of calcium they ought to have.

Widespread Variation in the Absorption Rate of Different Calcium Supplements
Having said that, what supplements are best? We need to be savvy consumers, professionals as well as patients. We need to ask calcium suppliers to demonstrate the bioavailability of their products. If they have done so, you can use the product with confidence; if they haven’t, I’d steer clear of it. You can’t tell from looking at tablets whether they’re going to be well absorbed, and you can’t tell from their composition. I’ve studied probably a dozen different preparations of calcium carbonate. They all look pretty much the same on the outside, yet some are absorbed twice as well as others, some 2 1/2 times as well. The consumer naturally thinks that if the label says 500 mg per tablet, he or she is getting 500 mg. That’s not true. We just have to hold the manufacturers to a higher standard.

I’m currently working with some people at the National Osteoporosis Foundation (NOF) to see whether, since we have not been able to get the FDA to create standards, voluntary health agencies such as NOF could do something about that as well.

Vitamin D2 versus Vitamin D3
JB: How about the vitamin D question? Is there a difference clinically between ergocalciferol and cholecalciferol, vitamin D2 versus D3?

RH: We used not to think so in humans. That’s why the units are the same; 100 IUs of D2 is defined as the same chemical quantity as 100 IUs of D3. But we’ve long known that’s not true in experimental animals; the two forms are not equivalent. For reasons I don’t fully understand, we simply assumed that the two were equivalent in humans. We now know, when we measure the serum 25-hydroxyvitamin D response, that they’re not. I have a paper in press right now which shows that D3 may actually be up to 10 times more active than D2. The pharmaceutical preparation that’s out there, 50,000 IUs of D2, may be the equivalent of only 5,000 or 6,000 IUs of D3, so it’s nowhere near as much as it sounds in terms of what it will do for your body.
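Dr. Heaney’s 10-to-1 figure lends itself to simple back-of-the-envelope arithmetic. The sketch below is an illustration only: the potency ratio is his estimate from work then in press, not an established conversion standard, and the function name is ours.

```python
# Rough D2-to-D3 equivalence using the ~10:1 potency ratio Dr. Heaney
# cites (an assumption from his in-press data, not a regulatory standard).

D3_PER_D2_POTENCY = 0.1  # assumed: 1 IU of D2 acts like ~0.1 IU of D3


def d3_equivalent_iu(d2_dose_iu: float) -> float:
    """Estimate the D3-equivalent activity, in IU, of a labeled D2 dose."""
    return d2_dose_iu * D3_PER_D2_POTENCY


# The common 50,000 IU D2 prescription:
print(round(d3_equivalent_iu(50_000)))  # -> 5000
```

On this reading, the familiar weekly 50,000 IU D2 capsule delivers roughly the activity of a 5,000 IU D3 dose, which is why Heaney calls it "nowhere near as much as it sounds."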

JB: We’ve made some interesting clinical observations over the years. When therapeutic doses of up to 50,000 IUs of D2 were given to some individuals with low serum 25-hydroxyvitamin D levels, it took some time for their 25-hydroxyvitamin D levels to come up into the normal range, suggesting either an absorption problem or a biotransformation problem. Have you seen this clinically, that these repletions don’t occur rapidly?

RH: We see two things. One, even if you use pure D3 by mouth (for experimental purposes you can get it in essentially any size dose, though it’s not available pharmaceutically above 1000 IUs), it takes about five months to bring a normal person up to a new steady-state level. Much of that rise is accomplished within the first couple of months, but the level is still climbing, and it takes about five months to reach a new plateau. It’s an inherently slow process. That reflects the fact that the human body evolved in an equatorial environment where we got vitamin D all the time through the skin, with constant, large inputs. The rapidity of conversion of D3 made in the skin to 25-hydroxyvitamin D, which is what the tissues need to work with, didn’t really matter, because the level was always up where it ought to be. But now, when we’re treating our patients with oral preparations of vitamin D, we need to understand that we won’t know what the new level is until we have waited several months.
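The time course Dr. Heaney describes (much of the rise within a couple of months, a plateau at about five months) is what simple first-order accumulation kinetics would predict. As a sketch, assuming an effective half-life of about one month for serum 25-hydroxyvitamin D (an illustrative assumption chosen to match his timeline, not a figure from the interview):

```python
# Fraction of the new steady-state 25(OH)D level reached during constant
# daily dosing, under first-order elimination kinetics.
# HALF_LIFE_MONTHS is an illustrative assumption, not a measured value.

HALF_LIFE_MONTHS = 1.0


def fraction_of_plateau(months: float, half_life: float = HALF_LIFE_MONTHS) -> float:
    """Approach to steady state: 1 - 2^(-t / half-life)."""
    return 1.0 - 0.5 ** (months / half_life)


for m in (1, 2, 5):
    print(m, round(fraction_of_plateau(m), 3))
# ~50% of the plateau after one month, ~75% after two, ~97% after five:
# "much of that rise is accomplished within the first couple of months,
# but it takes about five months to reach a new plateau."
```

Under this assumption, waiting roughly five half-lives before re-measuring, as Heaney advises, means you are sampling within a few percent of the eventual steady-state level.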

There’s a second problem with D2, however, and that is that some of the lab assays don’t pick up 25-hydroxyvitamin D2. You may be giving your patient enough vitamin D in the form of D2 to change the basic physiology that you’re trying to fix, but you won’t see any change in the blood level because the lab assay misses it. What would you see that might be indicative? Let’s say you’re treating a patient with malabsorption syndrome, with clear signs of osteomalacia: low serum calcium, low serum phosphorus, high PTH, high alkaline phosphatase, those things. You give vitamin D2, ergocalciferol. The PTH will drop; the alkaline phosphatase will drop; the serum phosphorus will come up; but, depending on the lab your hospital may be using, you may see no change at all in the serum 25-hydroxyvitamin D level. That’s not very helpful for the clinician. That’s a lab analysis problem, and the clinical pathologists simply need to get their act together. They need to understand that clinicians now want to know vitamin D status. Let’s get a single method that works in all hospitals and produces a number that our patients can take with them when they go from provider to provider, or from system to system, that means the same thing, the same way a serum potassium, a serum sodium, or a blood glucose would. We ought to have something we can rely on, without having to worry about which lab ran this particular sample.

JB: I want to thank you very much. For our listeners, I wanted to make a parenthetical comment. You were very kind in going back historically and talking about some of the founding fathers of clinical nutrition including, obviously, Dr. E.V. McCollum. For those listeners who are not aware of this, Dr. Heaney was granted what I think is the most prestigious award from the American Society of Clinical Nutrition in 2003: the E.V. McCollum Award. His award address was the one published in the American Journal of Clinical Nutrition that we have mentioned.

Dr. Heaney, congratulations on the acknowledgment of all your years of extraordinary contributions, and thank you so much for sharing this wisdom with the listeners of FMU.

RH: Thanks for your comment, and I’m pleased to be able to discuss these matters that I think are of great importance.

As Dr. Heaney mentioned in his discussion, the long- and short-latency disease concept can be applied to many conditions and many different nutrients. Low intakes of both calcium and vitamin D produce not only an index disease, but also long-latency diseases that were previously unrecognized, such as osteoporosis. Similarly, with folic acid, we are starting to see that same theme develop. Recall that a number of years ago, we interviewed Dr. John Lindenbaum, a professor of neurology, now unfortunately deceased. He talked about the long-latency disorders of subclinical deficiencies of vitamin B12 and folate and their relationship to dementia and cognitive impairment in the elderly, even in the absence of pernicious anemia, megaloblastic anemia, or any other hematological sign. He recommended that the only way to detect these deficiencies was with metabolite studies, looking at either homocysteine or methylmalonic acid.

Mechanisms of Homocysteine Toxicity on Connective Tissues: Implications for the Morbidity of Aging

Implications of the folic acid story have extended into the area of osteoporosis. Elevated plasma homocysteine is associated with decreased folate and B12 status, and it has repeatedly been identified as a strong independent risk factor for cardiovascular disease. More and more, we are starting to see it related to osteoporosis as well. For example, Krumdieck and Prince have called attention to the close parallels between the hallmark manifestations of homocystinuria, in which serum homocysteine concentrations are markedly elevated—occlusive vascular disease, osteoporosis, and mental deterioration—and the counterpart manifestations of normal aging, in which homocysteine concentrations fall between 10 and 100 µmol/L.[13] Those more modest elevations are also associated with increasing incidence of occlusive vascular disease, osteoporosis, dementia, and changes in vision in the aged population. To that extent, it appears that folate and B12 insufficiency might also be long-latency disorders that present in mid- to late life.

For those of you who are long-time subscribers to FMU, you will recall that the interview with Dr. John Lindenbaum was in April 1995, some nine years ago. We discussed the concept of long-latency B12 and folate insufficiency and dementia, although he did not actually call it long latency. Dr. Heaney’s theme brings a nice definition to it.

Let us begin to look at these long-latency conditions and how they relate to clinical medicine. Nutritional scientists are becoming increasingly aware of the role of nutrients in reducing the burden of several chronic diseases. However, I think we need to look beyond short-term deficiencies to long-term deficiencies as well. How has this been regulated and ultimately codified into dietary recommendations? As Dr. Heaney points out in his article in the American Journal of Clinical Nutrition, it is difficult to understand or justify the resistance of regulatory authorities to changing current practices, a resistance typified by the failure to take these long-latency discoveries into account. Regulatory bodies obviously cannot respond to every shift in the winds of public opinion, so caution is warranted. Nevertheless, as Dr. Heaney points out, a middle ground should be found. These disorders present some difficulty because change is inherently conservative. Another aspect of the problem is the position of nutritional policy makers that “we won’t change without proof.” This is the old burden of the double-blind, placebo-controlled, randomized trial and adequate, irrefutable scientific justification for change. The irony of that position, which seems unassailable, was captured by Walter Willett in a recent interview with Gary Taubes. Dr. Willett stated: “They say, ‘You really need a high level of proof to change the recommendations,’ which is ironic, because they never had a high level of proof to set them.”[12] What is the standard for the burden of proof when the recommendations were originally established on shaky science to begin with?

Dr. Heaney points out:

“The most difficult part of the challenge, I suspect, is finding the will to settle on nutrient intake recommendations that are biologically defensible while we wait for evidence that lower intakes may be safe or higher intakes more beneficial. In many instances, because the current recommendations are based on the prevention of the index disease only, they can no longer be said to be biologically defensible. The preagricultural human diet, insofar as it can be reconstructed, may well be a better starting point for policy. Such a diet cannot be known in detail, but as several investigators have shown, the diet probably would have had at least the following features: high protein intake, low glycemic index, high calcium intake, high folic acid intake, an alkaline ash residue, and (for reasons of latitude and skin exposure) high vitamin D input. It is in this nutritional context that human physiology evolved, and it is to this context that human physiology is adapted. The burden of proof should fall on those who say that these more natural conditions are not needed and that lower intakes are safe.”[12]


1 Dickinson RJ. Asian vulture update. Living Bird. Spring 2004. Cornell Lab of Ornithology, pp. 5-6.

2 Oaks JL, Gilbert M, Virani MZ, et al. Diclofenac residues as the cause of vulture population decline in Pakistan. Nature. 2004;427(6975):630-633.

3 Cipollone F, Toniato E, Martinotti S, et al. A polymorphism in the cyclooxygenase 2 gene as an inherited protective factor against myocardial infarction and stroke. JAMA. 2004;291(18):2221-2228.

4 Sasso FC, Carbonara O, Nasti R, et al. Glucose metabolism and coronary heart disease in patients with normal glucose tolerance. JAMA. 2004;291(15):1857-1863.

5 Brand-Miller J, Hayne S, Petocz P, Colagiuri S. Low-glycemic index diets in the management of diabetes: a meta-analysis of randomized controlled trials. Diabetes Care. 2003;26:2261-2267.

6 Sievenpiper JL, Vuksan V. Glycemic index in the treatment of diabetes: the debate continues. J Am Coll Nutr. 2004;23(1):1-4.

7 Sagara M, Kanda T, Jelekera MN, et al. Effects of dietary intake of soy protein and isoflavones on cardiovascular disease risk factors in high risk, middle-aged men in Scotland. J Am Coll Nutr. 2004;23(1):85-91.

8 Zhang R, Ma J, Xia M, Zhu H, Ling WH. Mild hyperhomocysteinemia induced by feeding rats diets rich in methionine or deficient in folate promotes early atherosclerotic inflammatory processes. J Nutr. 2004;134:825-830.

9 Schrepf R, Limmert T, Weber PC, Theisen K, Sellmayer A. Immediate effects of n-3 fatty acid infusion on the induction of sustained ventricular tachycardia. Lancet. 2004;363:1441-1442.

10 Albert C. Fish oil—an appetising alternative to anti-arrhythmic drugs? Lancet. 2004;363:1412-1413.

11 Stark KD, Holub BJ. Differential eicosapentaenoic acid elevations and altered cardiovascular disease risk factor responses after supplementation with docosahexaenoic acid in postmenopausal women receiving and not receiving hormone replacement therapy. Am J Clin Nutr. 2004;79:765-773.

12 Heaney RP. Long-latency deficiency disease: insights from calcium and vitamin D. Am J Clin Nutr. 2003;78(5):912-919.

13 Krumdieck CL, Prince CW. Mechanisms of homocysteine toxicity on connective tissues: implications for the morbidity of aging. J Nutr. 2000;130:365S-368S.
