Monday, May 20, 2013
There are many published studies with evidence that cholesterol levels are positively associated with heart disease. In multivariate analyses the effects are usually small, but they are still there. On the other hand, there is also plenty of evidence that cholesterol is beneficial in terms of health. Here of course I am referring to the health of humans, not of the many parasites that benefit from disease.
For example, there is evidence () that cholesterol levels are negatively associated with mortality (i.e., higher cholesterol leading to lower mortality), and are positively associated with vitamin D production from skin exposure to sunlight ().
Most of the debris accumulated in atheromas is made up of macrophages, which are specialized cells that “eat” cell debris (ironically) and some pathogens. The drug market is still hot for cholesterol-lowering drugs, often presented in TV and Internet ads as effective tools to prevent formation of atheromas.
But what about macrophages? What about calcium, another big component of atheromas? If drugs were to target macrophages for atheroma prevention, drug users may experience major muscle wasting and problems with adaptive immunity, as macrophages play a key role in muscle repair and antibody formation. If drugs were to target calcium, users may experience osteoporosis.
So cholesterol is the target, because there is a “link” between cholesterol and atheroma formation. There is also a link between the number of house fires in a city and the amount of firefighting activity in the city, but we don’t see mayors announcing initiatives to reduce the number of firefighters in their cities to prevent house fires.
When we talk about variations in cholesterol, we usually mean variations in cholesterol carried by LDL particles. That is because LDL cholesterol seems to be very “sensitive” to a number of factors, including diet and disease, presenting quite a lot of sudden variation in response to changes in those factors.
LDL particles seem to be intimately involved with disease, but do not be so quick to conclude that they cause disease. Something so widespread and with so many functions in the human body could not be primarily an agent of disease that needs to be countered with statins. That makes no sense.
Looking at the totality of evidence linking cholesterol with health, it seems that cholesterol is extremely important for the human body, particularly when it is under attack. So the increases in LDL cholesterol associated with various diseases, notably heart disease, may not be because cholesterol is causing disease, but rather because cholesterol is being used to cope with disease.
LDL particles, and their content (including cholesterol), may be used by the body to cope with conditions that themselves cause heart disease, and end up being blamed in the process. The lipid hypothesis may be a classic case of reverse causation. A case in point is that of cholesterol responses to stress, particularly mental stress.
Grundy and Griffin () studied the effects of academic final examinations on serum cholesterol levels in two groups of medical students in the winter and spring semesters (see table below). During control periods, average cholesterol levels in the two groups were approximately 213 and 216 mg/dl. During the final examination periods, average cholesterol levels were 248 and 240 mg/dl. These measures were for winter and spring, respectively.
One could say that even the bigger increase from 213 to 248 is not that impressive in percentage terms, approximately 16 percent. However, HDL cholesterol does not go up significantly in response to sustained (e.g., multi-day) stress; it actually goes down. So the increases reported can be safely assumed to be chiefly due to LDL cholesterol. For most people, LDL particles are the main carriers of cholesterol in the human body. Thus, in percentage terms, the increases in LDL cholesterol are about twice those reported for total cholesterol.
A 32-percent increase (16 x 2) in LDL cholesterol would not go unnoticed today. If one’s LDL cholesterol were to be normally 140 mg/dl, it would jump to 185 mg/dl with a 32-percent increase. It looks like the standard deviations were more than 30 in the study. (This is based on the standard errors reported, and assuming that the standard deviation equals the standard error multiplied by the square root of the sample size.) So we can guess that several people might go from 140 to 215 or more (this is LDL cholesterol, in mg/dl) in response to the stress from exams.
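The back-of-the-envelope arithmetic above can be sketched as follows. The sample size and standard error are illustrative assumptions, not values reported in the study; the text only infers that the implied standard deviations exceeded 30.

```python
import math

# Assumed values for illustration only; the study's exact n and SE are not given here.
n = 20                  # hypothetical sample size per group
se = 7.0                # hypothetical standard error of the mean, in mg/dl

# SD = SE * sqrt(n), the relationship used in the text
sd = se * math.sqrt(n)
print(round(sd, 1))     # 31.3 -- i.e., "more than 30"

# A 32-percent increase applied to a baseline LDL of 140 mg/dl
baseline_ldl = 140
increase = 0.32
print(round(baseline_ldl * (1 + increase)))  # 185
```

With an SD above 30, individuals one or two standard deviations above the average response would plausibly land at 215 mg/dl or more, which is the guess made in the text.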
And the effects above were observed with young medical students, in response to the stress from exams. What about a middle-aged man or woman trying to cope with chronic mental stress for months or years, due to losing his or her job, while still having to provide for a family? Or someone who has just been promoted, and finds himself or herself overwhelmed with the new responsibilities?
Keep in mind that sustained dieting can be a major stressor for some people, particularly when one gets to that point in the dieting process where he or she gets regularly into negative nitrogen balance (muscle loss). So you may have heard people say that, after months or years of successful dieting, their cholesterol levels are inexplicably going up. Well, this post provides one of many possible explanations for that.
The finding that cholesterol goes up with stress has been replicated many times. It has been known for a long time, with studies dating back to the 1950s. Wertlake and colleagues () observed an increase in average cholesterol levels from 214 to 238 mg/dl, also among medical students, in response to the mental and emotional stress of an examination week; a study similar to the one above.
Those enamored with the idea of standing up the whole day, thinking that this will make them healthy, should know that performing cognitively demanding tasks while standing up is a known stressor. It is often used in research where stress must be induced to create an experimental condition. Muldoon and colleagues () found that people performing a mental task while standing experienced an increase in serum cholesterol of approximately 22 points (in mg/dl).
What we are not adapted for is sitting down for long hours in very comfortable furniture (, ). But our anatomy clearly suggests adaptations for sitting down, particularly when engaging in activities that resemble tool-making, a hallmark of the human species. Among modern hunter-gatherers, tool-making is part of daily life, and typically it is much easier to accomplish sitting down than standing up.
Modern urbanites could be seen as engaging in activities that resemble tool-making when they produce things at work for internal or external customers, whether those things are tangible or intangible.
So, stress is associated with cholesterol levels, and particularly with LDL cholesterol levels. Diehard lipid hypothesis proponents may argue that this is how stress is associated with heart disease: stress increases cholesterol which increases heart disease. Others may argue that one of the reasons why LDL cholesterol levels are sometimes found to be associated with heart disease-related conditions, such as chronic stress, and other health conditions is that the body is using LDL cholesterol to cope with those conditions.
Specifically regarding mental stress, a third argument has been put forth by Patterson and colleagues, who claimed that stress-mediated variations in blood lipid concentrations are a secondary result of decreased plasma volume. The cause, in their interpretation, was unspecified – “vascular fluid shifts”. However, when you look at the numbers reported in their study, you still see a marked increase in LDL cholesterol, even controlling for plasma volume. And this is all in response to “10 minutes of mental arithmetic with harassment” ().
I tend to think that the view that cholesterol increases with stress because cholesterol is used by the body to cope with stress is the closest to the truth. Among other things, stress increases the body’s demand for signaling molecules, also known as hormones, and cholesterol is the raw material for several of them, notably the steroid hormones (e.g., cortisol).
Cholesterol also seems to be a diet marker, tending to go up in high fat diets. This is easier to explain. High fat diets increase the demand for bile production, as bile is used in the digestion of fat. Most of the cholesterol produced by the human body is used to make bile.
Monday, May 6, 2013
In September last year (2012) I went to South Korea to speak about nonlinear data analysis with WarpPLS (), initially for business and engineering faculty and students at Korea University in Seoul, and then as a keynote speaker at the HumanCom 2012 Conference () in Gwangju. Since Seoul is in the north part of the country, and Gwangju in the south, I had the opportunity to see quite a lot of the land and the people in this beautiful country.
(Korea University’s main entrance, Anam campus)
(In front of Korea University’s main Business School building)
Korea University is one of the most prestigious universities in South Korea. In the fields of business and engineering, it is arguably the most prestigious. It also has a solid international reputation, attracting a large number of highly qualified foreign students.
I wanted to take this opportunity to try to understand why obesity prevalence is so low in South Korea, which is a common characteristic among East Asian countries, even though the caloric intake of South Koreans seems to be relatively high. Foods that are rich in carbohydrates, such as rice, are also high-calorie foods. At 4 calories per gram, carbohydrates are not as calorie-dense as fats (9 calories per gram), but they sure add up and can make one obese.
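The calories-per-gram arithmetic can be made concrete with a small sketch using the standard 4/4/9 figures mentioned above. The meal composition here is a made-up example, not a measurement of any actual Korean meal.

```python
# Standard calories-per-gram figures (the 4/4/9 rule cited in the text)
CAL_PER_G = {"carbohydrate": 4, "protein": 4, "fat": 9}

def calories(grams_by_macro):
    """Total calories from grams of each macronutrient."""
    return sum(CAL_PER_G[m] * g for m, g in grams_by_macro.items())

# A hypothetical rice-heavy meal: 150 g carbohydrate, 30 g protein, 20 g fat
print(calories({"carbohydrate": 150, "protein": 30, "fat": 20}))  # 900
```

Even at 4 calories per gram, the carbohydrate portion dominates this hypothetical meal's calorie count, which is the "they sure add up" point.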
Based on my observations, explanations for the leanness that are too obvious or that focus on a particular dietary item (e.g., kimchi, green tea etc.) tend to miss the point.
Let us take for example a typical South Korean meal, like the one depicted in the photos below, which we had at a restaurant in Seoul. If you are a foreigner, this type of meal would be difficult to have without a local accompanying you, because it is not easy to make yourself understood in a traditional restaurant in South Korea speaking anything other than Korean.
(Main items of a traditional South Korean meal)
(You cook your own meal)
The meal started with thin-sliced meat (with some fat, but not much) and vegetables, with the obligatory side dishes, notably kimchi (). This part of the meal was low in calories and high in nutrients. Then we had two high-calorie low-nutrient items: noodles and rice. The rice was used in the end to soak up the broth left in the pot, so it ended up adding to the nutritional value of the meal.
Because we started the meal with the low-calorie high-nutrient items, the meat and vegetables, our consumption of noodles and rice was not as high as if we had started the meal with those items. In a meal like this, a good chunk of calories would come from the carbohydrate-rich items. Still, it seems to me that we ingested plenty of calories, enough to make one fat over the long run, eating these types of meals regularly.
A side note. As I said here before, the caloric value of protein is less than the commonly listed 4 calories per gram, essentially because protein is a multi-purpose macronutrient.
In our meal, the way in which at least one of the carbohydrate-rich items was prepared possibly decreased its digestible carbohydrate content, and thus its calorie content, in a significant way. I am referring to the rice, which had been boiled, cooled and stored, way before it was re-heated and served. This likely turned some of its starch content into resistant starch (). Resistant starch is essentially treated by our digestive system as fiber.
Another factor to consider is the reduction in the glycemic load (not to be confused with glycemic index) of the rice. As I noted, the rice was used to soak up the broth from the pot. This soaking up process significantly reduces the rice’s glycemic load, because of a unique property of rice. It has an amazing capacity of absorbing liquid and swelling in the process.
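The distinction between glycemic index and glycemic load can be illustrated with the standard formula GL = GI × available carbohydrate (g) / 100. The GI and carbohydrate figures below are illustrative assumptions, not measurements of this meal.

```python
def glycemic_load(glycemic_index, available_carbs_g):
    """Glycemic load of a serving: GL = GI * available carbs (g) / 100."""
    return glycemic_index * available_carbs_g / 100

# White rice has a high GI (around 70); assume a serving with 40 g of
# digestible carbohydrate.
print(glycemic_load(70, 40))  # 28.0 -- a GL above 20 is usually called "high"

# If resistant-starch formation and swelling with broth cut the digestible
# carbohydrate per serving to, say, 30 g, the GL drops even though GI does not:
print(glycemic_load(70, 30))  # 21.0
```

A lower glycemic load for the same glycemic index simply means fewer digestible carbohydrate grams per serving, which is the effect attributed above to the cooled, broth-soaked rice.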
This was one of several traditional Korean meals I had, and all of them followed a similar pattern in terms of the order in which the food items were consumed, and the way in which the carbohydrate-rich items were prepared. The order in which you eat foods affects your calorie intake: if you eat the high nutrient-to-calorie ratio foods first, and leave the low nutrient-to-calorie ones for later, my experience is that you will eat less of the latter.
Another possible hidden reason for the low rate of obesity in South Korea is what seems to be a cultural resistance to industrialized foods, particularly among older generations; a sort of protective cultural inertia, if you will. Those foods are slowly being adopted – my visit left me with that impression – but not as quickly as in other countries. And there is overwhelming evidence that consumption of highly industrialized foods, especially those rich in refined carbohydrates and sugars, is a major cause of obesity and a host of other problems.
Cultural resistance to, or cultural inertia against the adoption of, highly industrialized foods among pregnant mothers limits one’s exposure to those foods at a particularly critical time in one’s life – the 9-month gestation period in the mother’s womb. This could have a major impact on a person’s propensity to become obese or have other metabolic derangements later on in life. Some refer to this phenomenon as a classic example of modern epigenetics, whereby acquired traits appear to induce innate traits across generations.
Another reason I was excited about this trip to South Korea was my interest in table tennis. I wanted to know more about their table tennis “culture”, and how it was influenced by their general culture. China dominates modern table tennis, with such prodigies as Ma Lin, Ma Long, Wang Hao, Wang Liqin, and Zhang Jike. South Korea is not far behind; two of my all-time favorite South Korean players are Kim Taek-Soo and former Olympic champion Ryu Seung-Min.
Another side note. The best table tennis player of all time is arguably Jan-Ove Waldner (), from Sweden. I talked about him in my book on compensatory adaptation (). Waldner has been one of the few players outside China to be able to consistently beat the best Chinese players at times when they were at the top of their games, including Ma Lin ().
But, as I soon learned, as far as sports are concerned, it is not table tennis that most South Koreans are interested in these days. It is soccer.
A nice surprise during this trip was a tour in Gwangju in which we visited a studio that converted standard movies to stereoscopic three-dimensional ones (photo below). These folks were getting a lot of business, particularly from the USA, in a market that is very competitive.
(A standard-to-3D movie conversion studio in Gwangju)
Let’s get back to the health angle of the post. So there you have it, two possible “hidden” reasons for the low prevalence of obesity in South Korea, and maybe in other East Asian countries. One is the way in which foods are prepared and consumed, and the other is cultural inertia. These are not very widely discussed, but future research may change that.
Monday, April 22, 2013
Andrew Weil, a major proponent of the idea of self-healing (), has repeatedly acknowledged the influence of osteopaths such as Robert C. Fulford () on him, particularly regarding his philosophy of health management. Self-healing is not about completely autonomous healing; it is about healing by stimulation of the body's self-repair processes, which in some cases can be achieved by simply reducing stress.
Interestingly, there are many reported cases of osteopaths curing people from various diseases by doing things like cranial manipulation and other forms of touching. We also have much evidence of health improvement through prescription of drugs that don’t appear to have any health benefits, which is arguably a similar phenomenon.
The number of such reported cases highlights what seems to be a reality about diseases in general, which is that they often have a psychosomatic basis. Their “cure” involves making the affected person believe that someone, a healer, can cure him or her, with or without drugs. The healer then cures the person essentially through the power of suggestion.
Paleoanthropological evidence suggests that this healer-induced phenomenon has always been widespread among hunter-gatherer cultures, so much so that it may well have been the result of evolutionary pressures. If this is correct, how does it relate to health in our modern world?
I am very interested in hunter-gatherer cultures, and I have also been living in Texas for almost 10 years now. So it is only natural for me to try to learn more about the former hunter-gatherer groups in Texas, particularly those who lived in the area prior to the introduction of horses by the Europeans.
There are parks, museums, and other resources on the topic in various parts of Texas, within driving distance. Unfortunately much has been lost, as the Plains Indians of Texas (e.g., Comanches and Kiowas) who succeeded those pre-horse native groups were largely forcibly relocated to reservations in Oklahoma.
Anthropological evidence suggests that the earliest migrations into the Americas occurred via the Bering Strait, initially from Siberia into Alaska, and then gradually spreading southward to most of the Americas between 13,000 and 10,000 years ago.
Much of what is known about the early Texas Indians is due to Álvar Núñez Cabeza de Vaca, a Spanish explorer who survived a shipwreck and lived among the Amerindians in and around Texas between 1528 and 1536. He later wrote a widely cited report about his experiences ().
(Cabeza de Vaca and his companions; source: Biography.com)
In Spanish, “cabeza de vaca” means, literally, “cow’s head”. This odd surname, Cabeza de Vaca, clearly had a flavor of nobility to it in Spain at the time.
You may have heard that early American Indians were uniformly of short stature, not unlike most people at the time, but certainly shorter than the average American today. Cabeza de Vaca dispels this idea with his description of the now extinct Karankawas, a description that has been borne out by anthropological evidence. The male members “towered above the Spaniards”, often standing 6 ft or taller, in addition to being muscular.
The Karankawas were a distinct indigenous group that shared the same environment and similar food sources with other early groups of much lower stature. This strongly suggests a genetic basis for their tall stature and muscular build, probably due to the “founder effect”, well known among population geneticists.
Cabeza de Vaca and three companions, two Spaniards and one Moroccan slave, were believed by the Amerindians to be powerful healers. This enabled them to survive among early Texas Indians for several years. Cabeza de Vaca and his colleagues at times acknowledged that they were probably curing people through what we would refer today as a powerful placebo effect.
Having said that, Cabeza de Vaca also seems to have come to believe, at least to a certain extent, that he was indeed able to perform miraculous cures. He repeatedly stated his conviction that those cures were primarily due to divine intervention, as he was a devout Christian, although there are many contradictory statements in this respect in his reports (possibly due to fear of the Spanish Inquisition). He also performed simple surgeries.
Much has been written about Cabeza de Vaca’s life among the early Indians of Texas and surrounding areas, including the report by Cabeza de Vaca himself. One of my favorites is the superb book “A Land So Strange” () by Andrés Reséndez, a professor of history at the University of California at Davis ().
The Spanish explorer’s experiences have been portrayed in the film “Cabeza de Vaca” (), which focuses primarily on the supernatural angle, with a lot of artistic license. I must admit that I was a bit disappointed with this film, as I expected it to show more about the early Indians’ culture and lifestyle. Juan Diego, the Spanish actor portraying Cabeza de Vaca, was razor thin in this film - a fairly realistic aspect of the portrayal.
It is quite possible that modern humans have an innate tendency to believe in and rely on the supernatural, a tendency that is the product of evolution. We know from early and more recent evidence from hunter-gatherer societies that supernatural beliefs help maintain group cohesion and, perhaps quite importantly, mitigate the impact that the knowledge of certain death has on the mental health of hunter-gatherers.
Homo sapiens is unique among animals in its awareness of its own mortality, which may be a byproduct of its also unique ability to make causal inferences. Supernatural beliefs among hunter-gatherers almost universally address this issue, by framing death as a threshold between this existence and the afterlife, essentially implying immortality.
Yet, supernatural beliefs seem to also have a history of exploitation, where they are used to manipulate others. Cabeza de Vaca himself implies that, at points, he and his companions took personal advantage of the beliefs in their healing powers by the various indigenous groups with which they came into contact.
Modern humans who are convinced that they have no supernatural beliefs often perceive that to be a major advantage. But there could be disadvantages. One is that they may have more difficulty dealing with psychosomatic disorders. The conscious knowledge that a disorder is psychosomatic may well pale in comparison with a belief in supernatural healing, in terms of curative power. Another potential disadvantage is a greater likelihood of suffering from mental disorders.
Finally, are those who are sure that they have no supernatural beliefs really correct? Well, subconsciously things may be different. Perhaps a good test would be to go to a “convincing” movie (i.e., not a laughable “B-level” one, for lack of a better word) about supernatural things, such as possession or infestation by evil spirits, and see if it has any effect on you.
If the experience does have an effect on you, even a small one, couldn't this suggest that your subconscious belief in the supernatural may not be so easy to control in a conscious way? I suspect that having no supernatural beliefs is unnatural and unhealthy. In most cases it probably creates a conscious-subconscious conflict, and a fairly pessimistic view of the world.
My guess is that it is better to have those beliefs, in some form or another, and be on guard against exploitation.
Monday, April 8, 2013
You can dry many types of meat, including beef, pork, goat, deer, and even some types of seafood, such as mussels. Drying meat tends to significantly increase the meat’s protein content per gram, often more than doubling it. It also helps preserve the meat, as bacteria need an aqueous environment to grow; adding salt helps further prevent bacterial growth.
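The "more than doubling" of protein content per gram is essentially concentration arithmetic: removing water raises the share of protein in what remains. A minimal sketch, with assumed (not measured) composition figures:

```python
def protein_fraction(protein_g, water_g, other_g):
    """Protein as a fraction of total weight."""
    total = protein_g + water_g + other_g
    return protein_g / total

# Assumed composition of fresh lean beef, per 100 g:
# ~22 g protein, ~70 g water, ~8 g fat and other solids.
fresh = protein_fraction(22, 70, 8)    # 0.22, i.e., 22% protein by weight

# After drying, most of the water is gone; assume 10 g of water remains.
# The protein and other solids are unchanged.
dried = protein_fraction(22, 10, 8)    # 0.55, i.e., 55% protein by weight

print(round(dried / fresh, 1))  # 2.5 -- protein per gram more than doubles
```

The same logic explains why drying preserves the meat: the water that bacteria need to grow is what was removed.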
Dried meat preparation and consumption was common among the Plains Indians (e.g., the Cheyenne, Comanche, and Lakota tribes), for whom it was also a valuable trade item. They often ground the dried meat into a powder and mixed it with fat and berries; the result was pemmican. Many other hunter-gatherer cultures around the world have incorporated dried meat into their diets.
Below is a recipe for homemade beef jerky, which is very close in terms of nutrition content to the dried meat of the Plains Indians’ time; that is, the time when the Plains Indians subsisted mostly on bison. Commercial beef jerky typically has a lower nutrient-to-calorie ratio, in part because sugar is added to it. The recipe is for beef jerky, but it can be used to make jerky with bison meat as well.
- Cut about 3 lbs of beef muscle into thin strips (see photo below). Ideally you should buy it partially cut already, with most of the fat trimmed. Cutting with or against the grain doesn’t seem to make much difference, at least to me.
- Prepare some dry seasoning powder by mixing salt and cayenne pepper.
- Season the strips and place them on a tray with a grid on top, so that the fat that will come off the meat is captured by the tray and doesn’t drip into the oven.
- Preheat the oven to about 180 degrees Fahrenheit, and place the strips in it until you can easily pull a piece of the meat off with your fingers (see photos below, for an idea of how they would look). This should take about 1 hour or so. You will not technically be “baking” or "cooking" the meat at this temperature, although the digestibility of the final product will be comparable to that of cooked meat – i.e., greater digestibility than raw meat.
- Leave the strips in the oven until they are cold; this will dry them further.
Homemade beef jerky, prepared as above, is supposed to be eaten cold. In this sense, it could be thought of as a bit like salami, but with a higher protein-to-fat ratio. If your kids eat this on a regular basis, I suspect that their future orthodontist needs will be significantly reduced. Homemade beef jerky, like the commercial one, requires some serious chewing.
The dried strips of meat can be kept outside the fridge for a long time, but if you intend to keep them for more than a few weeks, I would suggest that you keep them in the fridge. Interestingly, adding sugar apparently increases the non-refrigerated shelf life of beef jerky even further. It doesn’t improve the flavor though, in my opinion.
This is a zero-carbohydrate food item, which may be a good choice for those who are insulin resistant or diabetic, and also for those on low-carbohydrate or just-enough-carbohydrate diets. Often I hear bodybuilders who eat multiple meals per day say that it is hard for them to prepare high-protein snacks that they can easily carry with them. Well, beef jerky is one option.
Monday, March 25, 2013
Last year I traveled to South Korea to give presentations on nonlinear structural equation modeling and WarpPLS (). These are an advanced statistical analysis technique and related software tool, respectively, which have been used extensively in this blog to analyze health data, notably data related to the China Study.
I gave a couple of presentations at Korea University, which is in Seoul, and a keynote address at a conference in Gwangju, in the south part of the country. So I ended up seeing quite a lot of this beautiful country, and meeting many people. Some of my impressions regarding health and lifestyle issues need separate blog posts, which are forthcoming.
One issue that kept me thinking, as it did when I visited Japan a few years ago as well, was the obvious leanness of the South Koreans, compared with Americans, even though you don’t see a lot of emphasis on dieting there. Interestingly, this phenomenon also poses a challenge to many dietary schools of thought. For example, consumption of high-glycemic-index carbohydrates seems to be relatively high in South Korea.
The relative leanness of South Koreans is probably due to a combination of factors. A major one, it seems, is often forgotten. It is related to epigenetics. This term, “epigenetics”, is often assigned different meanings depending on the context in which it is used. Here it is used to refer to innate predispositions that don’t have a primarily genetic basis.
Epigenetic phenomena often give the impression that acquired characteristics can be inherited, and are frequently, and misguidedly, used as examples in support of a theory often associated with Jean-Baptiste Pierre Antoine de Monet, better known as Lamarck.
A classic example of epigenetics, in this context, is that of a mother with type II diabetes giving birth to a child that will develop type II diabetes at a young age. Typically type II diabetes develops in adults, but its incidence in children has been increasing lately, particularly in certain areas. And I think that this classic example is in part related to the general leanness of South Koreans and of people in other cultures where adoption of highly industrialized foods has been relatively slow.
In other words, I think that it is possible that a major protection in South Korea, as well as in Japan and other countries, is the cultural resistance, particularly among older generations, against adopting modern diets and lifestyles that deviate from their traditional ones.
This brings me to Drs. Francisco Cervantes and Marivic Torregosa (pictured below). Dr. Cervantes is the Chief Director of Laredo Pediatrics and Neonatology, a pediatrician who studied and practiced in a variety of places, including Mexico, New Jersey, and Texas. Dr. Torregosa is a colleague of mine, a college professor and nurse practitioner in Laredo, with a Ph.D. in nursing and a research interest in child obesity.
As it turns out, Laredo, a city in South Texas near the border with Mexico, seems like the opposite of South Korea in terms of health, and this may well be related to epigenetics. This presents an enormous opportunity for research, and for helping people who really need help.
In Laredo, as well as in other areas where insulin resistance and type II diabetes are rampant, there is a great deal of variation in health. There are very healthy folks in Laredo, and very sick ones. This great deal of variation is very useful in the identification of causative factors through advanced statistical analyses. Lack of variation tends to have the opposite effect, often “hiding” causative effects.
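The point about variation can be illustrated with a toy simulation: the same underlying effect of a factor x on an outcome y produces a much weaker observed correlation when the variation in x is restricted. All numbers here are illustrative.

```python
import random

random.seed(1)  # make the toy example reproducible

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

N = 2000
x_full = [random.gauss(0, 3) for _ in range(N)]      # wide variation in x
x_narrow = [random.gauss(0, 0.5) for _ in range(N)]  # restricted variation in x
noise = [random.gauss(0, 1) for _ in range(N)]

# The causal effect of x on y is identical (slope 1) in both samples
y_full = [x + e for x, e in zip(x_full, noise)]
y_narrow = [x + e for x, e in zip(x_narrow, noise)]

print(round(corr(x_full, y_full), 2))      # high, close to 0.95
print(round(corr(x_narrow, y_narrow), 2))  # much lower, around 0.45
```

The causal effect is the same in both samples; only the spread of the causative factor differs. This is why a population with great variation in health, like Laredo's, is so useful for identifying causative factors.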
Drs. Cervantes, Torregosa, and I had a presentation accepted for the 2013 Ancestral Health Symposium (). It is titled “Gallbladder Disease in Children: Separating Myths from Facts”. It is entirely based on data collected and analyzed by Dr. Cervantes, who is very knowledgeable about statistics. Below is the abstract.
Cholesterol’s main role in the body is to serve as raw material for bile acids; the conversion of cholesterol to bile acids by the liver accounts for approximately 70 percent of the daily disposal of cholesterol. Bile acids are then stored in the gallbladder and secreted to aid in the digestion of dietary fat. It is often believed that high cholesterol levels cause gallbladder disease. In this presentation, we will discuss various aspects of gallbladder disease, with a focus on children. The presentation will be based on data from 2116 patients of the Laredo Pediatrics & Neonatology. The patients, 1041 boys and 1075 girls, are largely first generation American-born children of Hispanic descent; a group at very high risk of developing gallbladder disease. This presentation will dispel several myths, and lay out a case for a strong association between gallbladder disease and abnormally high body fat levels. Gallbladder disease appears to be largely preventable in children through diet and lifestyle modifications, some of which will be discussed during the presentation.
Many people seem to be unaware of the fact that cholesterol production and disposal are strongly associated with secretion of bile acids. Most of the body's cholesterol is used to produce bile acids, which are reabsorbed from the gut, in a cyclical process. This is the reason behind the use of "bile acid sequestrants" to reduce cholesterol levels.
The focus on gallbladder disease in the presentation comes from an interest by Dr. Cervantes, based on his many years of clinical experience, in using gallbladder disease markers to identify and prevent other conditions, including several conditions associated with what we refer to as diseases of affluence or civilization.
Dr. Cervantes is unique among clinical practitioners in that he spends a lot of time analyzing data from his patients. His knowledge of data analyses techniques rivals that of many professional researchers I know. And he does that at his own expense, something that most clinical practitioners are unwilling to do. Dr. Cervantes and I will be co-authoring blog posts here in the future.
Monday, March 11, 2013
A new study linking sugar consumption with diabetes prevalence has gained significant media attention recently. The study was published in February 2013 in the journal PLoS ONE (). The authors are Sanjay Basu, Paula Yoffe, Nancy Hills and Robert H. Lustig.
Among the claims made by the media is that “… sugar consumption — independent of obesity — is a major factor behind the recent global pandemic of type 2 diabetes” (). As it turns out, the effects revealed by the study seem to be very small, which may actually be a side effect of data aggregation; I will discuss this further below.
Fruits are exonerated
Let me start by saying that this study also included in the analysis the main natural source of sugar, fruit, as a competing variable (competing with the effects of sugar itself), and found it to be unrelated to diabetes. As the authors note: “None of the other food categories — including fiber-containing foods (pulses, nuts, vegetables, roots, tubers), fruits, meats, cereals, and oils — had a significant association with diabetes prevalence rates”.
This should not surprise anyone who has actually met and talked with Dr. Lustig, the senior author of the study and a very accessible man who has been reaching out to the public in a way that few in his position do. He is a clinician and senior researcher affiliated with a major university; public outreach, in the highly visible way that he does it, is probably something that he does primarily (if not solely) to help people. Dr. Lustig was at the 2012 Ancestral Health Symposium, and he told me, and anyone who asked him, that sugar in industrialized foods was his target, not sugar in fruits.
As I noted here before, the sugar combination of fruits, in their natural package, may in fact be health-promoting (). The natural package probably promotes enough satiety to prevent overconsumption.
Both (unnatural) sugar and obesity have effects, but they are tiny in this study
The Diabetes Report Card 2012 () provides a wealth of information that can be useful as a background for our discussion here.
In the USA, general diabetes prevalence varies depending on state, with some states having higher prevalence than others. The vast majority of diabetes cases are of type 2 diabetes, which is widely believed to be strongly associated with obesity.
In 2012, the diabetes prevalence among adults (aged 20 years or older) in Texas was 9.8 percent. This rate is relatively high compared to other states, although lower than in some. So, among a random group of 1,000 adult Texans, you would find approximately 98 with diabetes.
Prevalence increases with age. Among USA adults in general, prevalence of diabetes is 2.6 percent within ages 20–44, 11.7 percent within ages 45–64, and 18.9 percent at age 65 or older. So the numbers above for Texas, and prevalence in almost any population, also reflect the age distribution of the population.
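The role of age distribution can be illustrated with a simple weighted average. In the sketch below, only the bracket-level prevalence figures come from the report card; the age shares are hypothetical:

```python
# Hypothetical age shares for an adult population (assumed for illustration)
age_shares = {"20-44": 0.50, "45-64": 0.35, "65+": 0.15}

# Prevalence (percent) by age bracket, from the Diabetes Report Card 2012
prevalence = {"20-44": 2.6, "45-64": 11.7, "65+": 18.9}

# Overall prevalence is the share-weighted average of bracket prevalences
overall = sum(age_shares[a] * prevalence[a] for a in age_shares)
print(round(overall, 1))  # 8.2 percent for this particular age mix
```

Shifting the same shares toward the older brackets raises the overall figure even if the risk within each bracket stays the same, which is why populations with different age mixes are hard to compare directly.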
According to the 2013 study published in PLoS ONE, a 1 percent increase in obesity prevalence is associated with a 0.081 percent increase in diabetes prevalence. This comes directly from the table below, fifth column on the right. That is the column for the model that includes all of the variables listed on the left.
We can translate the findings above into more meaningful terms by referring to hypothetical groups of 1,000 people. Let us say we have two groups of 1,000 people. In one of them we have 200 obese people (20 percent); in the other, no obese person. We would find only between 1 and 2 more people with diabetes in the group with 200 obese people than in the other group.
The authors also considered overweight prevalence as a cause of diabetes prevalence. A section of the table with the corresponding results is included below. They also found a significant effect, of smaller size than for obesity – which itself is a small effect.
The study also suggests that consumption of the sugar equivalent of a 12 oz. can of regular soft drink per person per day was associated with a 1.1 percent rise in diabetes prevalence. The effect here is about the same as that of a 1 percent increase in obesity.
That is, let us say we have two groups of 1,000 people. In one of them we have 200 people (20 percent) consuming one 12 oz. can of soft drink per day; in the other, no one consuming sugar. (Sugar from fruits is not considered here.) We would find only about 2 more people with diabetes in the group with 200 sugary soda drinkers than in the other group.
In other words, the effects revealed by this study are very small. They are so small that their corresponding effect sizes make them borderline irrelevant for predictions at the individual level. Based on this study, obesity and sugar consumption combined would account for no more than 5 out of each 100 cases of diabetes (a generous estimate, based on the results discussed above).
Even though they are weak, the effects revealed by this study are not irrelevant for policy-making, because policies tend to influence the behavior of very large numbers of people. For example, if the number of people that could be influenced by policies to curb consumption of refined sugar were 100 million, the number of cases of diabetes that could be prevented would be 200 thousand, notwithstanding the weak effects revealed by this study.
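The policy arithmetic here is just proportional scaling of the per-1,000 figure; a minimal sketch, using the numbers from the soda example discussed earlier:

```python
extra_cases_per_1000 = 2          # the soda-drinker figure discussed earlier
people_reached = 100_000_000      # people a policy could plausibly influence

# Scale the per-1,000 figure up to the full population reached
cases_prevented = people_reached * extra_cases_per_1000 / 1000
print(int(cases_prevented))       # 200000
```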
Why are the effects so small?
The effects in this study are based on data aggregated by country. When data are aggregated by population, the level of variation in the data is reduced, sometimes dramatically; the problem is proportional to the level of aggregation (e.g., it is greater for aggregation by country than by city).
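The variance-shrinking effect of aggregation is easy to demonstrate with a quick simulation. The numbers below are made up; with purely random grouping the shrinkage is extreme, while real countries differ systematically from one another, so the real effect is milder:

```python
import random
import statistics

random.seed(42)

# 100 "countries" of 1,000 "individuals" each, with made-up intake values
individuals = [random.gauss(50, 15) for _ in range(100 * 1000)]

# Aggregate to the country level by taking each country's mean
country_means = [
    statistics.mean(individuals[i * 1000:(i + 1) * 1000])
    for i in range(100)
]

print(round(statistics.stdev(individuals), 1))    # ~15: individual-level spread
print(round(statistics.stdev(country_means), 2))  # ~0.5: spread after aggregation
```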
Because there can be no association without correlation, and no correlation without variation, coefficients of association tend to be reduced when data aggregation occurs. This is, in my view, the real problem behind what statisticians often refer to, in “statospeech”, as “ecological fallacy”. The effects in aggregated data are weaker than the effects one would get without aggregation.
So, I suspect that the effects in this study, which are fairly weak at the level of aggregation used (the country level), reflect much stronger effects at the individual level of analysis.
Should you avoid getting obese? Should you avoid consuming industrialized products with added sugar? I think so, and I would still have recommended these without this study. There seems to be no problem with natural foods containing sugar, such as fruits.
This study shows evidence that sugar in industrialized foods is associated with diabetes, independently from obesity, but it does not provide evidence that obesity doesn’t matter. It shows that both matter, independently of one another, which is an interesting finding that backs up Dr. Lustig’s calls for policies to specifically curb refined sugar consumption.
Again, what the study refers to as sugar (measured as availability, but implying consumption) seems to refer mostly to industrialized foods to which sugar was added to make them more enticing. Fruit consumption was also included in the study, and found to have no significant effect on diabetes prevalence.
Here is a more interesting question. If a group of people have a predisposition toward developing diabetes, due to any reason (genetic, epigenetic, environmental), what would be the probability that they would develop diabetes if they became obese and/or consumed unnatural sugar-added foods?
This type of question can be answered with a moderating effects analysis, but as I noted here before (), moderating effects analyses are not conducted in health research.
Monday, February 25, 2013
Low testosterone (a.k.a. “low T”) is caused by worn out glands no longer able to secrete enough T, right? At least this seems to be the most prevalent theory today, a theory that reminds me a lot of the “tired pancreas” theory () of diabetes. I should note that this low T problem, as it is currently presented, is one that affects almost exclusively men, particularly middle-aged men, not women. This is so even though T plays an important role in women’s health.
There are many studies that show associations between T levels and all kinds of diseases in men. But here is a problem with hormones: often several hormones vary together, in a highly correlated fashion. If you rely on statistics to reach conclusions, you must use techniques that allow you to rule out confounders; otherwise you may easily reach wrong conclusions. Examples include multivariate techniques that are sensitive to Simpson’s paradox, and nonlinear algorithms; both are employed, by the way, by modern software tools such as WarpPLS (). Unfortunately, these are rarely, if ever, used in health-related studies.
Many low T cases may actually be caused by something other than tired T-secretion glands, perhaps a hormone (or set of hormones) that suppress T production; a T “antagonist”. What would be a good candidate? The figure below shows two graphs. It is from a study by Starks and colleagues, published in the Journal of the International Society of Sports Nutrition in 2008 (). The study itself is not directly related to the main point that this post tries to make, but the figure is.
Look at the two graphs carefully. The one on the left is of blood cortisol levels. The one on the right is of blood testosterone levels. Ignore the variation within each graph. Just compare the two graphs and you will see one interesting thing – cortisol and testosterone levels are inversely related. This is a general pattern in connection with stress-induced cortisol elevations, repeating itself over and over again, whether the source of stress is mental (e.g., negative thoughts) or physical (e.g., intense exercise).
And the relationship between cortisol and testosterone is strong. Roughly speaking, an increase in cortisol levels, from about 20 to 40 μg/dl, appears to bring testosterone levels down from about 8 to 5 ng/ml. A level of 8 ng/ml (the same as 800 ng/dl) is what is normally found in young men living in urban environments. A level of 5 ng/ml is what is normally found in older men living in urban environments.
So, testosterone levels are brought down by nearly 40 percent by that variation in cortisol.
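The unit relationships used here are straightforward; a quick check, using the testosterone figures from the graphs:

```python
def ng_per_ml_to_ng_per_dl(ng_ml):
    # 1 dL = 100 mL, so a concentration in ng/mL is 100 times larger in ng/dL
    return ng_ml * 100

print(ng_per_ml_to_ng_per_dl(8))   # 800, i.e., 8 ng/ml = 800 ng/dl
print((8.0 - 5.0) / 8.0 * 100)     # 37.5, the percent drop from 8 to 5 ng/ml
```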
Chronic stress can easily bring your cortisol levels up to 40 μg/dl and keep them there. More serious pathological conditions, such as Cushing’s disease, can lead to sustained cortisol levels that are twice as high. There are many other things that can lead to chronically elevated cortisol levels. For instance, sustained calorie restriction raises cortisol levels, with a corresponding reduction in testosterone levels. As the authors of a study () of markers of semistarvation in healthy lean men note, grimly:
“…testosterone (T) approached castrate levels …”
The study highlights a few important phenomena that occur under stress conditions: (a) cortisol levels go up, and testosterone levels go down, in a highly correlated fashion (as mentioned earlier); and (b) it is very difficult to suppress cortisol levels without addressing the source of the stress. Even with testosterone administration, cortisol levels tend to be elevated.
Isn’t it possible that cortisol levels go up because testosterone levels go down (reverse causality)? Possible, but unlikely. Evidence that testosterone administration may reduce cortisol levels, when it is found, tends to be rather weak or inconclusive. A good example is a study by Rubinow and colleagues (). Not only were their findings based on bivariate (or unadjusted) correlations, but also on a chance probability threshold that is twice the 5 percent level usually employed in statistical analyses.
Let us now briefly shift our attention to dieting. Dieting is the main source of calorie restriction in modern urban societies; an unnatural one, I should say, because it involves going hungry in the presence of food. Different people have different responses to dieting. Some responses are more extreme, others milder. One main factor is how much body fat you want to lose (weight loss, as a main target, is a mistake); another is how low you expect body fat to get. Many men dream about six-pack abs, which usually require single-digit body fat percentages.
The type of transformation involving going from obese to lean is not “cost-free”, as your body doesn’t know that you are dieting. The body “sees” starvation, and responds accordingly.
Your body is a little bit like a computer. It does exactly what you “tell” it to do, but often not what you want it to do. In other words, it responds in relatively predictable ways to various diet and lifestyle changes, but not in the way that most of us want. This is what I call compensatory adaptation at work (). Our body often doesn’t respond in the way we expect either, because we don’t actually know how it adapts; this is especially true for long-term adaptations.
What initially feels like a burst of energy soon turns into something a bit more unpleasant. At first the unpleasantness takes the form of psychological phenomena, which were probably the “cheapest” for our bodies to employ in our evolutionary past. Feeling irritated is not as “expensive” a response as feeling physically weak, seriously distracted, nauseated etc. if you live in an environment where you don’t have the option of going to the grocery store to find fuel, and where there are many beings around that can easily kill you.
Soon the responses take the form of nastier body sensations. Nearly all of those who go from obese to lean will experience some form of nasty response over time. The responses may be amplified by nutrient deficiencies. Obesity would probably rarely, if ever, have been experienced by our Paleolithic ancestors, because they would never have become obese in the first place. Going from obese to lean is as much a Neolithic novelty as becoming obese, although much less common.
And it seems that those who have a tendency toward mental disorders (e.g., generalized anxiety, manic-depression), even if at a subclinical level under non-dieting conditions, are the ones that suffer the most when calorie restriction is sustained over long periods of time. Most reports of serious starvation experiments (e.g., Roy Walford’s Biosphere 2 experiment) suggest the surfacing of mental disorders and even some cases of psychosis.
Emily Deans has a nice post () on starvation and mental health.
But you may ask: What if my low T problem is caused by aging? You just said that older males tend to have lower T. To which I would reply: Isn’t it possible that the lower T levels normally associated with aging are in many cases a byproduct of higher stress hormone levels? Take a look at the figure below, from a study of age-related cortisol secretion by Zhao and colleagues ().
As you can see in the figure, cortisol levels tend to go up with age. And, interestingly, the range of variation seems very close to that in the earlier figure in this post, although I may be making a mistake in the conversion from nmol/l to ng/ml. As cortisol levels go up, T levels should go down in response. There are outliers. Note the male outlier at the middle-bottom part, in his early seventies. He is represented by a filled circle, which refers to a disease-free male.
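The conversion in question can be checked directly. Cortisol’s molecular weight is about 362.5 g/mol; the 550 nmol/l figure plugged in below is only an example value:

```python
CORTISOL_MW = 362.46  # molecular weight of cortisol, g/mol

def cortisol_nmol_l_to_ng_ml(nmol_l):
    # nmol/L -> ng/L (multiply by MW), then ng/L -> ng/mL (divide by 1000)
    return nmol_l * CORTISOL_MW / 1000

def cortisol_nmol_l_to_ug_dl(nmol_l):
    # ng/mL -> ug/dL divides by 10 (1 ug/dL = 10 ng/mL)
    return cortisol_nmol_l_to_ng_ml(nmol_l) / 10

print(round(cortisol_nmol_l_to_ug_dl(550), 1))  # 19.9: 550 nmol/l ≈ 20 ug/dl
```

So a cortisol reading of roughly 550 nmol/l corresponds to about 20 μg/dl, the lower end of the range in the earlier figure.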
Dr. Arthur De Vany claims to have high T levels in his 70s. It is possible that he is like that outlier. If you check out De Vany’s writings, you’ll see his emphasis on leading a peaceful, stress-free, life (). If money, status, material things, health issues etc. are very important for you when you are young (most of us, a trend that seems to be increasing), chances are they are going to be a major source of stress as you age.
Think about individual property accumulation, as it is practiced in modern urban environments, and how unnatural and potentially stressful it is. Many people subconsciously view their property (e.g., a nice car, a bunch of shares in a publicly-traded company) as their extended phenotype. If that property is damaged or loses value, the subconscious mental state evoked is somewhat like that in response to a piece of their body being removed. This is potentially very stressful; a stress source that doesn’t go away easily. What we have here is very different from the types of stress that our Paleolithic ancestors faced.
So, what will happen if you take testosterone supplementation to solve your low T problem? If your problem is due to high levels of cortisol and other stress hormones (including some yet to be discovered), induced by stress, and your low T treatment is long-term, your body will adapt in a compensatory way. It will “sense” that T is now high, together with high levels of stress.
Whatever form long-term compensatory adaptation may take in this scenario, somehow the combination of high T and high stress doesn’t conjure up a very nice image. What comes to mind is a borderline insane person, possibly with good body composition, and with a lot of self-confidence – someone like the protagonist of the film American Psycho.
Again, will the high T levels, obtained through supplementation, suppress cortisol? It doesn’t seem to work that way, at least not in the long term. In fact, stress hormones seem to affect other hormones a lot more than other hormones affect them. The reason is probably that stress responses were very important in our evolutionary past, which would make any mechanism that could override them nonadaptive.
Today, stress hormones, while necessary for a number of metabolic processes (e.g., in intense exercise), often work against us. For example, serious conflict in our modern world is often solved via extensive writing (through legal avenues). Violence is regulated and/or institutionalized – e.g., military, law enforcement, some combat sports. Without these, society would break down, and many of us would join the afterlife sooner and more violently than we would like (see Pinker’s take on this topic: ).
Sir, the solution to your low T problem may actually be found elsewhere, namely in stress reduction. But be careful: you run the risk of becoming a nice guy.