My readers know that, in my view, nutrition is everything when it comes to general health and wellness (well, nearly everything). Globally, population evidence suggests that people who eat a diet emphasizing vegetables and fruits (and thus nutrient-dense, antioxidant-rich food sources) have lower incidences of virtually any disease you care to look for. We know that vitamins are essential to proper cellular function and energy metabolism; that is why we call them essential vitamins. Without them, the cellular chemistry necessary for life cannot run.
We also know that eating protein throughout the day is associated with faster metabolism and less central obesity. Clearly nutrition and lifestyle are critical to health. So why do the media and a large part of the medical community want to dismiss conversations and studies looking at the role of nutrition and supplements in our health? I think it is because broad variables like multivitamin use and food sources are very hard to control for. Doctors are not well versed in nutrition; I recall being told dogmatically in medical school that "vitamins are a waste of money," with very little evidence offered to support the claim. This thinking hasn't really changed. It is easy to attack things you are ignorant of.
Those who want to tout the benefits of supplements have plenty of water to carry as well. Scientists who have done vitamin studies often do a poor job of controlling for the supplements used, and good study design is critical for decent science. The supplement industry fails to fund good studies, which is convenient for the industry (and equally convenient for doctors who wish to stay dogmatic). I say it is high time some folks put their money where their mouths are. Meanwhile, we consumers have to do our best to eat optimally if we are interested in staying well. The decision to supplement is a personal one. Based upon the current literature, no one should anticipate their insurance carrier paying for vitamins.
The media blitz that preceded two publications in the Annals of Internal Medicine (December 17, 2013; Volume 159, Number 12) highlights how the medical community at large enjoys pooh-poohing vitamin supplementation for disease. In fact, the editorial in the same issue was titled "Enough Is Enough: Stop Wasting Money on Vitamin and Mineral Supplements." Below I provide an analysis of the two trials from a doctor's perspective.
Trial one examined a multivitamin and mineral combination in patients with a history of heart attack. It was a component of another trial that also looked at chelation (infusion of a molecule designed to bind heavy metals and charged ions), so it was really a study within a study, both arms examining alternative therapies meant to lower the chances of additional heart attacks and other blood-vessel-related events (stroke, leg artery occlusion, etc.). In a nutshell, it is flawed in both its statistics and its study design. Let me discuss further.
One thing that stands out right away is this: how did they control for the vitamin? We are told in the article what is in the vitamin, but we aren't told who made it, and we aren't told whether any quality-analysis testing proved that the advertised vitamins were actually in the caplet. Also, some theoretically important antioxidants were left out of the formula: no fish oil, no astaxanthin, no zeaxanthin, no CoQ10, no lutein. Some of these excluded components have been shown to help certain conditions, including sudden-death incidence in acute heart attack patients (fish oil) and age-related macular degeneration (AREDS-formula vitamins). In fact, as I look at the table of ingredients, thiamine (important for carbohydrate metabolism) isn't in the product; so was it really a multivitamin? One could argue not. From the get-go, the very premise of a multivitamin trial comes into question.
Note also that the vitamin was given as a caplet, a tablet formed so that it cannot easily be broken apart. Since I started using the biophotonic scanner in my office, I have observed that tablet products are not optimally absorbed when I measure tissue levels of the carotenoid family of nutrients, such as those in AREDS-type supplements; capsules and liquid vitamins are absorbed much more easily by the body. If these components aren't showing clear improvement with common over-the-counter tablets, I have to suspect that many of the other components aren't getting into the body either. So are we studying placebo against placebo? Or, more likely, are we studying sub-optimal supplementation against placebo? This commentary about the product illustrates a study-design problem. As a scientist, I think it is important to prove to your audience that you identified and controlled for the true ingredients in the product studied. Good supplement companies routinely have their products independently analyzed for quantifiable confirmation that the ingredients are as advertised. No one in the study bothered to tell us what was done for quality analysis. This alone makes me want to toss the article in the trash. Yet all the major media outlets shouted its conclusions from on high, so let's press on.
Now let's look a bit further into the design of this trial, including the statistics. For a study to be meaningful, there must be enough subjects in both the treatment arm (multivitamin) and the placebo group to measure a benefit attributable to the intervention. The designers originally wanted to enroll 2,372 people over 3 years, assuming they could then detect a 25% relative difference in events with meaningful confidence. They further assumed that, over the study period, 20% of the placebo (untreated) group would have an event.
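To see where numbers like these come from, here is a minimal sketch of a standard two-proportion sample-size calculation, using the trial's stated assumptions (a 20% placebo event rate and a 25% relative reduction, hence a 15% treated rate) plus conventional values I am supplying myself (two-sided alpha of 0.05, 80% power). The real trial used a time-to-event design with other adjustments, so this simplified approximation will not reproduce the 2,372 figure exactly; it just shows the kind of arithmetic behind an enrollment target.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Subjects needed per arm to detect event rates p1 vs p2 (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = NormalDist().inv_cdf(power)          # desired power
    p_bar = (p1 + p2) / 2                      # pooled rate under the null
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Trial assumptions: 20% placebo event rate, 25% relative reduction -> 15%
n = n_per_group(0.20, 0.15)
print(n, 2 * n)  # roughly 900 per arm under this simplified model
```

The gap between this crude estimate and the trial's 2,372 target reflects the extra subjects a real trial budgets for dropout and its survival-analysis framework.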
So how did this work out? They didn't get the number of subjects they wanted; in fact, they recruited only 1,708 patients. Right away the study is suspect again. Normally this article goes in the trash, but given the media attention I must press on. They followed these patients only 55 months on average, which may not have been long enough. Furthermore, they didn't enroll the patients until 4.6 years after their prior heart attack. Perhaps the group that might have benefited most died in the interim. To be a true secondary-prevention study, the scientists would have enrolled patients immediately following their heart attacks. At 4.6 years after the event, all of the patients were on multiple other drugs (all proven to lower heart attacks), and now we want to know whether a quasi-multivitamin helps with secondary events? As a lipid specialist, I know that cholesterol and other risk factors drive heart attacks and artery disease, and that is where intervention priorities lie. Additionally, I wonder whether multivitamins help prevent first events (primary prevention). Since oxidation of LDL cholesterol seems to create much of the disease, it stands to reason that a body replete in antioxidants might have a lower incidence of heart attack. This, in fact, is what the population studies on diet imply.
Now I turn to the baseline characteristics of the groups. Interestingly, about half of each group was already taking a multivitamin or other vitamins and minerals. So we are statistically underpowered due to insufficient recruitment, and now we learn that half the subjects were already taking a form of what we planned to study? What additional effect could we expect to find? Another interesting point: the vitamin group had what appears to be a significantly lower baseline incidence of atrial fibrillation and valvular heart disease, both high-risk associations for stroke, one of the events measured in the study. So it looks as though, by the luck of random assignment, the placebo group may have carried some inherently higher risk for stroke than the intervention group. I find this noteworthy.
Next fact: nearly half (46%) of subjects dropped off therapy, in roughly equal proportions in the placebo and vitamin groups.
Next, let's look at the event rate. By the end of the trial there were only about half as many events, in both the placebo group and the treatment group, as the design had assumed. Specifically, when they calculated how many subjects to recruit (to power the ability to measure a difference), they expected 20% of the placebo group to have an event by the study's end. In fact, 13% of the placebo group had an event, versus 11% of the treatment group (lower, but not statistically significant for a 25% difference of effect). What this means to me is that, given the initial design, there weren't enough patients studied to begin with; and since events occurred only half as often as anticipated, we don't know whether the observed trend is meaningful. For instance, if there really is a true 2% absolute reduction in heart attack and vascular events (roughly 16 events prevented for every 800 people placed on a quasi-multivitamin), with no difference in side effects or adverse events, then perhaps a real multivitamin of high quality (independently confirmed active ingredients) and proven absorption (as measured by blood, tissue, or functional nutritional assays) would deliver an even stronger benefit, and a cost-effective way to prevent events, deaths, procedures, and hospitalizations. I don't know, but this study certainly doesn't toll a death knell for multivitamins' potential role in atherosclerosis and oxidized LDL!
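As a rough check on that 13%-versus-11% result, here is a minimal sketch of a two-proportion z-test. I am assuming, for illustration only, that the 1,708 subjects split into two equal arms of 854 (the paper's own survival analysis is more sophisticated than this). Even so, a simple comparison of the raw event rates lands well short of significance:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(p1: float, p2: float, n1: int, n2: int):
    """Two-sided z-test comparing two independent event rates."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled event rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error of the difference
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# 13% events on placebo vs 11% on treatment, assumed 854 subjects per arm
z, p = two_proportion_z(0.13, 0.11, 854, 854)
print(round(z, 2), round(p, 2))  # z near 1.3, p near 0.2: not significant
```

A p-value in that neighborhood is exactly what a suggestive-but-underpowered trend looks like.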
To highlight this further: there did appear to be a trend in favor of supplementation (though, recall, not a statistically significant one, given outcome-incidence assumptions that proved too optimistic). I am almost tempted to call this study positive. The Kaplan-Meier curve below, plotting events, shows a separation between the groups that appeared early and persisted throughout the trial.
When the curves do not cross over time, we usually attribute this to a real effect. The separation wasn't wide enough to change the study's conclusions, but as I pointed out above, there are several problems with the statistical design and assumptions. Going further, look at Figure 3 of the linked page and you will see hazard ratios for the various events. What is interesting is that all but a few of the parameters fell on the treatment side (meaning benefit). I would submit that it is very possible that, had they recruited the right number of subjects, what is now a non-significant trend in favor of the quasi-multivitamin might have become statistically significant.
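To put a number on that possibility, here is a minimal sketch, using the same simple two-proportion approximation as before rather than the trial's actual survival model, and again assuming 854 subjects per arm. It estimates the power the trial had to detect the observed 13%-versus-11% difference at its actual enrollment, and how large a trial powered for that difference would need to be:

```python
from math import ceil, sqrt
from statistics import NormalDist

ND = NormalDist()
Z_A = ND.inv_cdf(0.975)  # two-sided alpha = 0.05

def power(p1: float, p2: float, n: int) -> float:
    """Approximate power to detect event rates p1 vs p2 with n subjects per arm."""
    se = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)
    return ND.cdf(abs(p1 - p2) / se - Z_A)

def n_needed(p1: float, p2: float, target: float = 0.80) -> int:
    """Subjects per arm required to reach the target power (normal approximation)."""
    z_b = ND.inv_cdf(target)
    p_bar = (p1 + p2) / 2
    num = (Z_A * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(power(0.13, 0.11, 854))    # roughly 25% power at the actual enrollment
print(2 * n_needed(0.13, 0.11))  # roughly 8,000+ total subjects to detect it reliably
```

Under these assumptions the trial had only about a one-in-four chance of declaring the observed difference significant even if it were entirely real, which is the quantitative version of my complaint above.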
At the end of the day, this study is flawed in numerous ways, and quite frankly no meaningful conclusions can be drawn, except that any future studies on this topic need to use a comprehensive, high-quality product and be sufficiently powered to reach real conclusions. My final parting comment on this article: the other arm of the trial (the TACT Trial) looked at chelation and concluded there was a positive effect (albeit a very small one). Why did the study designers look favorably on a therapy that past studies had found ineffective, when its separation was just as weak as this one's? I don't know.
Now let's move to the other article, on multivitamin supplements and cognitive decline. This article, I feel, was much better designed. It was a 12-year follow-up in an age range that should have shown some benefit, if one was measurable, because one would have expected the incidence of dementia (the worst form of cognitive decline) to be high enough to see a difference. One problem, though: they studied the brightest kids in the class. In my view, this is not the group to study if you want to find clinically significant cognitive decline. The study looked at male physicians, and these folks (I am not saying this just because I am in the group <smile>) are definitely above average in cognitive function. In retrospect, this study was set up to find no effect, so we shouldn't be surprised that this was its conclusion.
Recently there was an insightful report on the dementia epidemic published in the New England Journal. The article lends credence to the idea that studying the brightest is a set-up for "lack of effect." Data collected on the incidence of dementia over time show a significant drop in rates between 1982 and 2002. The researchers think their analysis suggests that a higher level of education is one variable that consistently seems to lower the chances of dementia. I am sure there are other factors; they point out that improved (that is, lower) stroke rates are also linked to lower dementia rates. We started seriously addressing high blood pressure following the SHEP trial, published in 1991. Prior brain-imaging data and reviews of dementia have shown that while all of us take some "brain hits" (microvascular strokes) as we age, some of us seem less tolerant of those hits than others. This recent information suggests that perhaps some of that tolerance reflects how much we learned and used our stored memory, and maybe some of it is nutritional.

When I looked at the study in the Annals, the physicians had a much lower incidence of obesity and diabetes than the national average, and diabetes (seen more often with obesity) is another factor associated with higher rates of dementia. Again, supplementing a group that had a low likelihood of cognitive decline and that happened to be better nourished than average is probably not where the money is. My conclusion: this is not the group, and therefore not the study, to look to for answers about dementia and nutrition or supplements. The group's likelihood of getting the illness is just too low, and it appears to be above average nutritionally, making deficiencies (or the correction of deficiencies) unlikely.
There you have it: a doctor's perspective on two recent trials. The media should at least give doctors an opportunity to critique the literature before it is judged by the public on the strength of attention-grabbing headlines that have had no peer review or meaningful critical analysis of the information offered.