Monday, December 15, 2014

You can eat a lot during the Holiday Season and gain no body fat, as long as you also eat little


The evolutionary pressures imposed by periods of famine shaped the physiology of most animals, including humans, toward a design that favors asymmetric food consumption. That is, most animals are “designed” to alternate between eating little and eating a lot.

Often when people hear this argument they point out the obvious: there is no evidence that our ancestors were constantly starving. This is correct, but what these folks seem to forget is that evolution responds to events that alter reproductive success rates (), even if those events are rare.

If an event causes a significant amount of death, a population will still evolve traits in response to it, even if the event occurs only occasionally. Food scarcity is one such event.

Since evolution is blind to complexity, adaptations to food scarcity can take all shapes and forms, including counterintuitive ones. Complicating this picture is the fact that food provides us not only with fuel, but also with the raw materials for important structural components, signaling elements (e.g., hormones), and process catalysts (e.g., enzymes).

In other words, we may have traits that are health-promoting under conditions of food scarcity, but those traits are only likely to benefit our health as long as food scarcity is relatively short-term. Not eating anything for 40 days would be lethal for most people.

By "eating little" I don’t mean necessarily fasting. Given the amounts of mucus and dead cells (from normal cell turnover) passing through the digestive tract, it is very likely that we’ll be always digesting something. So eating very little within a period of 10 hours sends the body a message that is similar to the message sent by eating nothing within the same period of 10 hours.

Most of the empirical research that I've reviewed suggests that eating very little within a period of, say, 10-20 hours and then eating to satisfaction in one single meal will elicit the following responses. Protein phosphorylation underlies many of them.

- Your body will hold on to its most important nutrient reserves when you eat little, using selective autophagy to generate energy (, ). This may have powerful health-promoting properties, including the effect of triggering anti-cancer mechanisms.

- Food will taste fantastic when you feast, to such an extent that this effect will be much stronger than that associated with any spice ().

- Nutrients will be allocated more effectively when you feast, leading to a lower net gain of body fat ().

- The effective caloric value of the food you eat will decrease, with a 14 percent decrease being commonly found in the literature ().

- The feast will prevent your body from down-regulating your metabolism via subclinical hypothyroidism (), which often happens when the period in which one eats little extends beyond a certain threshold (e.g., more than one week).

- Your mood will be very cheerful when you feast, potentially improving social relationships. That is, if you don’t become too grouchy during the period in which you eat little.

Recently I was participating in a meeting that went from early morning to late afternoon. We had the option of taking a lunch break, or working through lunch and ending the meeting earlier. Not only was I the only person to even consider the second option, but some people thought that the idea of skipping lunch was outrageous, with a few implying that they would get headaches and other problems.

When I said that I had had nothing for breakfast, a few thought that I was pushing my luck. One of my colleagues warned me that I might be damaging my health irreparably by doing those things. Well, maybe they were right on both counts, who knows?

It is my belief that the vast majority of humans will do quite fine if they eat little or nothing for a period of 20 hours. The problem is that they need to be convinced first that they have nothing to worry about. Otherwise they may end up with a headache or worse, entirely due to psychological mechanisms ().

There is no need to eat beyond satiety when you feast. I’d recommend that you just eat to satiety, and not force yourself to eat more than that. If you avoid industrialized foods when you feast, even better, because satiety will be achieved faster. One of the main characteristics of industrialized foods is that they promote unnatural overeating; congratulations, food engineers, on a job well done!

If you are relatively lean, satiety will normally be achieved with less food than if you are not. Hunger intensity and duration tend to be generally associated with body weight. Except for dedicated bodybuilders and a few other athletes, body weight gain is much more strongly influenced by body fat gain than by muscle gain.

Monday, November 10, 2014

Can salmon be a rich source of calcium?


Removing the bones from cooked fish before eating the flesh is not only a waste of mineral nutrients. In some cases it can also be difficult, and lead to a lot of wasted meat.

We know that many ancestral cultures employed slow-cooking techniques and tools, such as earth ovens (a.k.a. cooking pits; see ). Slow-cooking fish for a long time tends to soften the bones to the point that they can be eaten with the flesh.

The photo below shows the leftovers of a whole salmon that we cooked recently. We baked it with vegetables on a tray covered with aluminum foil. We set the oven at 300 degrees Fahrenheit (about 150 degrees Celsius), and baked the salmon for about 5 hours.



The end result is that we can eat the salmon, a rich source of omega-3 fat, with the bones. No need to remove anything. Just take a chunk, as you can see in the photo, and eat it whole.

It is a good idea to marinate the salmon for a few hours prior to baking it. This will create enough moisture to ensure that the salmon does not dry out during the baking process.

If you are a carnivore, you can make a significant contribution to sustainability by eating the whole animal, or as much of the animal as possible. This applies to fish, as I discussed here before (, , ).

Add eating less to this habit, and your health will benefit greatly.

Monday, October 13, 2014

Will the aluminum pan and foil give you Alzheimer’s?


Aluminum (or aluminium) is a silvery metal that is both ductile and light. It is abundant in nature. These characteristics make it a favorite in many industries. Food utensils, such as pans and pots, are often made of aluminum. This use is dwarfed by aluminum’s widespread use in the canning of foods and drinks (e.g., sodas and beers).

Based on a systematic literature review published in 2008, Ferreira et al. argued that there is credible evidence of an “association” between Alzheimer’s disease and aluminum intake (). This argument has been challenged by other researchers, but has nevertheless gained media attention. Keep in mind that an “association” is simply a nonzero correlation, and correlation does not guarantee causation.

A research report commissioned by the U.S. Environmental Protection Agency, authored by Krewski et al. and published in 2007, reviewed a number of studies on the health effects of aluminum (). Several interesting findings emerged from this extensive review of the literature.

For example, a targeted study conducted in the late 1980s and early 1990s suggested that the daily intake of aluminum of a 14- to 16-year-old male in the U.S. was about 11.5 mg, the main sources being additives to the following refined foods: cornbread (36.6% of total intake), American processed cheese (17.2%), pancakes (9.0%), yellow cake with icing (8.0%), taco/tostada (3.5%), cheeseburger (2.7%), tea (2.0%), hamburger (1.8%), and fish sticks (1.5%).

The meat that goes into the manufacturing of industrial hamburgers is not a significant source of aluminum. The same goes for the fish in the fish sticks. It is the industrial refining that makes the above-mentioned foods non-negligible sources of aluminum. One could argue that processed cheese should not even be called “cheese”, as its nutrient composition is far removed from that of “real” cheese, particularly aged raw milk cheese.

Aluminum-treated water is widely believed to be a major source of aluminum to the body, with the potential to lead to health-detrimental accumulation. Several of the studies reviewed by Krewski et al. suggest that this is a myth.

One study concluded that humans drinking aluminum-treated water over a period of 70 to 80 years would accumulate a total of approximately 1.5 mg of aluminum in the brain (1 mg/kg, with the average adult human brain weighing about 1.5 kg). That is at the high end of normal levels, and not much compared to the 34 mg found in some of those exposed to the Camelford water pollution incident (). And here is something else to consider: to err on the high side, the study made two unlikely assumptions, namely that all of the ingested aluminum was absorbed, and that those exposed suffered from a condition that entirely prevented excretion of the excess aluminum.
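Just to make the arithmetic explicit, here is a back-of-the-envelope sketch in Python using the numbers above; the values are the study’s upper-bound estimates, not measurements of any particular person.

```python
# Back-of-the-envelope version of the study's upper-bound estimate.
brain_mass_kg = 1.5            # average adult human brain mass
accumulation_mg_per_kg = 1.0   # estimated after 70-80 years of aluminum-treated water

total_mg = brain_mass_kg * accumulation_mg_per_kg
print(f"Estimated lifetime brain accumulation: {total_mg:.1f} mg")  # 1.5 mg

camelford_mg = 34.0            # found in some of those exposed at Camelford
print(f"Camelford cases had about {camelford_mg / total_mg:.0f}x more")  # ~23x
```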

Krewski et al.’s report and virtually all empirical studies I reviewed for this post suggest that the intake of aluminum from cooking utensils is negligible.

Is aluminum intake via food additives, arguably one of the main sources for most people living in urban environments today, likely to cause neurological diseases such as Alzheimer's disease?

My review of the evidence left me with the impression that most of the studies suggesting that aluminum intake can lead to neurological diseases make mistakes in causal reasoning. One representative example is Rifat et al.’s study published in 1990 in The Lancet ().

This old study is interesting because it looked at the effects of ingestion of finely ground aluminum by miners between 1944 and 1977; the aluminum was ingested because it was believed to protect against silicotic lung disease (caused by inhalation of crystalline silica dust).

As a side note, I should say that the intake levels reported in Rifat et al.’s study seem lower than what one would expect to see from a modern diet of refined foods. This seems odd. The levels may have been underestimated by Rifat et al. Or, more worryingly, aluminum intake may be quite high in a modern diet of refined foods.

Having said that, Rifat et al.’s article reports “… no significant differences between exposed and non-exposed miners in reported diagnoses of neurological disorder …” However, the tables below from their article show significant differences between exposed and non-exposed miners in their performance on cognitive tests. Those exposed to aluminum performed worse.





Two major variables that one would expect Rifat et al. to have controlled for are age and lung disease. They did control for age and a few other factors, with the corresponding results indicated as “adjusted” in the tables. However, they did not control for lung disease – the very factor that motivated aluminum intake.

Lung disease is likely to limit the supply of oxygen to the brain, and thus cause cognitive problems in the short and long term. Therefore, the cognitive impairments suggested by Rifat et al.'s study may have been caused by lung disease, and not by exposure to aluminum. This type of problem is a common feature of studies of the health effects of aluminum.

Will cooking in aluminum pans and aluminum foil give you Alzheimer’s? I doubt it.

Monday, September 15, 2014

Will your wireless router give you cancer?


If you pick up a magnet and move it up and down with your hand, you will be creating electromagnetic radiation. The faster you move the magnet, the higher the frequency of the radiation you create. The higher the frequency of the radiation, the shorter its wavelength. Higher frequency also means more energy per photon; the overall strength, or power, of a radiation source is a separate property, measured in watts (W).

We are constantly bombarded by electromagnetic radiation, which is usually classified based on its frequency (and also wavelength, since frequency and wavelength are inversely proportional). The main types of electromagnetic waves, in order of increasing frequency, are: radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays, and gamma rays.
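To make the frequency-wavelength relation concrete, here is a minimal sketch in Python; the 2.4 GHz figure is one of the common WiFi bands and is used here just for illustration.

```python
# Wavelength and frequency are linked by the speed of light: wavelength = c / f
C = 3.0e8  # speed of light in meters per second

def wavelength_m(frequency_hz: float) -> float:
    return C / frequency_hz

# One of the common WiFi bands (used here for illustration) is 2.4 GHz:
print(wavelength_m(2.4e9))  # 0.125, i.e. a wavelength of about 12.5 cm
```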

There has been a large amount of research on the health effects of wireless equipment, including wireless routers (figure below from Bestwirelessrouterreview.com), because of the electromagnetic radiation that they emit. Wireless equipment uses electromagnetic radiation of the radio-wave type.



In developed countries, wireless routers are ubiquitous. They are found everywhere – at home, in hotels and businesses, and even in public parks. They allow wireless devices to connect to the Internet by creating one or more “WiFi hotspots”.

The strength of the radiation emitted by wireless routers, when it reaches humans, is much lower than that of mobile phones. One of the reasons for this is the lower transmission power of wireless routers, which goes from 30 to 500 milliwatts (mW), versus 125 mW to 2 W for mobile phones.

But the main reason the radiation from wireless routers is weaker when it reaches humans is that wireless routers are normally located farther away from us than mobile phones. Radiation strength goes down according to the inverse-square law; i.e., proportionally to 1 divided by the square of the distance between source and receiver.

Given this, it has been estimated () that the exposure to 1 full year of radiation from a wireless router at home is equivalent, in terms of radiation reaching the body, to 20 minutes of exposure to the radiation emitted by a mobile phone.
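Here is a minimal sketch of the inverse-square effect in Python. The transmission powers come from the ranges mentioned above; the distances (phone at roughly 2 cm from the head, router 3 m away) are my own illustrative assumptions, and the source is idealized as a point radiating equally in all directions.

```python
import math

def power_density_w_per_m2(tx_power_w: float, distance_m: float) -> float:
    # Idealized point source: power spreads over a sphere of area 4*pi*r^2
    return tx_power_w / (4 * math.pi * distance_m ** 2)

# Powers from the text; distances are illustrative assumptions.
phone = power_density_w_per_m2(1.0, 0.02)  # ~1 W phone, ~2 cm from the head
router = power_density_w_per_m2(0.1, 3.0)  # 100 mW router, ~3 m away

print(f"phone/router exposure ratio: {phone / router:,.0f}")  # ~225,000
```

Distance dominates: even though the phone in this scenario transmits only 10 times more power, the power density reaching the body differs by about five orders of magnitude.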

If the radiation from wireless routers were to cause cancer, so should the radiation from mobile phones. So, what about mobile phones? Do they cause cancer?

In spite of a large amount of research conducted on the subject, no conclusive evidence has been found that the radiation from mobile phones causes cancer. A representative example of this research is a large Danish study (), whose results have recently been replicated.

Mobile phone radiation, like wireless router radiation, is currently classified by the International Agency for Research on Cancer (IARC) in Group 2B, namely “possibly carcinogenic”. This carries a recommendation of “more research”. Caffeic acid, found in coffee, is also in this group. It is useful to note that neither mobile phone nor wireless router radiation are classified in Group 2A, which is the “probably carcinogenic” IARC group.

When one considers the accumulated evidence regarding cancer risk associated with all types of electromagnetic radiation, the biggest concern by far is sunburn from ultraviolet radiation. The evidence suggests that it causes skin cancer. Chronic non-sunburn exposure to natural ultraviolet radiation, on the other hand, seems protective against most types of cancer (skin cancer included).

Will your wireless router give you cancer? I don’t think so.

Monday, August 11, 2014

Slow versus slow-brisk walking: Effects on type 2 diabetics


I am not a big fan of reviewing new studies published in refereed journals, particularly those that make it to the news. I prefer studies that have been published for a while, so that I can look at citations to them – both positive and negative.

But I am making an exception here for a study by Kristian Karstoft and colleagues (the senior author is diabetes researcher Thomas Solomon: ), accepted for publication on 30 June 2014 in the fairly targeted and selective journal Diabetologia (full text freely available in a .zip file at the time of this writing: ).

This is a small study. Individuals diagnosed with type 2 diabetes, and who were not being treated for the condition, were allocated to three groups: a control group (CON), an “interval” walking group (IWT), and a slow walking group (CWT).

The groups had 8, 12, and 12 people in them, respectively. Those people in the IWT group alternated between walking briskly and slowly for 1 hour five times a week. Those in the CWT group only walked slowly. Those in the CON group supposedly did not do any targeted exercise.

One of the interesting findings of this study was that there was no difference in terms of health effects between the CWT and the CON groups. The only group that benefited was the IWT group. That is, those who alternated between walking briskly and slowly benefited measurably from the exercise, but those who only walked slowly did not.

This study highlights two facts that I have mentioned here before, but that are often overlooked by those who suffer from type 2 diabetes or are on their way to developing the condition. They refer to visceral fat and are listed below. Visceral fat accumulates around the abdominal organs ().

- Type 2 diabetes is strongly associated with visceral fat accumulation, and is somewhat unrelated to subcutaneous fat accumulation (see the case of sumo wrestlers: ).

- Visceral fat is very easy to burn via glycolytic exercise, but does not seem to respond well to non-glycolytic exercise.

Glycolytic exercise burns sugar stored in muscle, in the form of glycogen, while it is being performed. This form of exercise raises growth hormone levels acutely. Weight training and sprints are types of glycolytic exercise, which also goes by other names, such as glycogen-depleting exercise and anaerobic exercise.

Often one sees prediabetics and type 2 diabetics avoiding this type of exercise because it pushes their blood glucose levels through the roof. That happens, however, only during the exercise. Afterward, the benefits are tremendous and appear to clearly outweigh the possible problems associated with the temporary exercise-induced hyperglycemia.

Take a look at the last line of this cropped version of Table 1 from the study, shown below. The relevant line for the point made above is the one that refers to visceral fat volume. As you can see, those in the IWT group had the greatest reduction in visceral fat. This was also the only statistically significant reduction among the three groups; according to an analysis of variance (ANOVA) test, the probability that it was due to chance was lower than one tenth of one percent.



The ANOVA test is "parametric", in the sense that it assumes that the data are normally distributed. However, the authors did not report conducting a test of normality, and the sample is very small. Given this, "non-parametric" tests, such as multiple one-group-two-conditions tests run with WarpPLS (link to a specific page of the .pdf file of a relevant academic paper: ), would not only be more advisable but would also provide much more information to readers.
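WarpPLS is the tool suggested above; as a freely available illustration of the same idea, here is a minimal sketch using Python's SciPy, with made-up before/after values for one group of 12. It first checks the normality assumption that a parametric test relies on, then runs a non-parametric paired comparison.

```python
from scipy import stats

# Made-up before/after visceral fat volumes for one group of 12 (illustration only).
before = [2.1, 2.4, 1.9, 2.8, 2.5, 2.2, 2.6, 2.3, 2.0, 2.7, 2.4, 2.5]
after  = [1.9, 2.2, 1.8, 2.5, 2.3, 2.0, 2.4, 2.1, 1.9, 2.4, 2.2, 2.2]

diffs = [a - b for a, b in zip(after, before)]

# Shapiro-Wilk: the normality check that the authors did not report running.
print(stats.shapiro(diffs))

# Wilcoxon signed-rank: a paired, non-parametric alternative that makes
# no normality assumption.
print(stats.wilcoxon(before, after))
```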

If you compare the line showing visceral fat with the two lines above it, within the body composition section of the table, you will notice another interesting pattern. In the IWT group the changes in average total body mass and total fat mass were also the greatest, but the largest change in percentage terms was the one in average visceral fat mass. Visceral fat mass is often correlated with total fat mass, with the correlation being a function of how sedentary individuals are, and it does not take a lot of visceral fat to cause serious problems.

Sumo wrestlers tend to have large ratios of total to visceral fat mass; virtually all of their body fat is subcutaneous. They also carry a lot of muscle mass. They achieve these results through intense glycolytic exercise alternated with periods of rest and consumption of large amounts of calorie-dense food. To these they add another ingredient: exercise in the fasted state, usually in the morning prior to a large breakfast. Exercise in the fasted state seems particularly conducive to visceral fat mobilization.

By the way, sumo wrestlers consume enormous amounts of carbohydrates, but as noted by Karam () have "low visceral fat, absent hyperglycemia and absent dyslipidemia despite massive subcutaneous obesity".

In my opinion the folks in the study by Karstoft and colleagues would have benefited even more, possibly a lot more, if they had alternated between sprinting and regular walking.

Monday, July 28, 2014

What is “relative risk” (RR)? The case of alcohol frequency and its impact on mortality from stroke


This post is in response to an inquiry by Ivor (sorry for the delayed response). It refers to a recent study by Rantakömi and colleagues on the effect of alcohol consumption frequency on mortality from stroke (). The study followed men who consumed alcohol to different degrees, including no consumption at all, over a period of a little more than 20 years.

The study purportedly controlled for systolic blood pressure, smoking, body mass index, diabetes, socioeconomic status, and total amount of alcohol consumption. That is, its results are presented as holding regardless of those factors.

The main results were reported in terms of “relative risk” (RR) ratios. Here they are, quoted from the abstract:

“0.71 (95% CI, 0.30–1.68; P = 0.437) for men with alcohol consumption <0.5 times per week and 1.16 (95% CI, 0.54–2.50; P = 0.704) among men who consumed alcohol 0.5–2.5 times per week. Among men who consumed alcohol >2.5 times per week compared with nondrinkers, RR was 3.03 (95% CI, 1.19–7.72; P = 0.020).”

Note the P values reported within parentheses. Loosely speaking, a P value is the probability that a result at least as strong would appear by chance alone if there were no actual effect. By convention, P values equal to or lower than 0.05 are considered statistically significant; P values greater than 0.05 are seen as referring to effects that cannot be unequivocally considered real.

This means that, of the results reported, only one seems to be due to a real effect, and that is the one that: “Among men who consumed alcohol >2.5 times per week compared with nondrinkers, RR was 3.03 …”

Why the authors report the statistically non-significant results as if they were noteworthy is unclear to me.

Before we go any further, let us look at what “relative risk” (RR) means. RR is given by the following ratio:

(Probability of an event when exposed) / (Probability of an event when not exposed)

In the study by Rantakömi and colleagues, the event is death from stroke. The exposure refers to alcohol consumption at a certain level, compared to no alcohol consumption (no exposure).

Now, let us go back to the result regarding consumption of alcohol more than 2.5 times per week. That result sounds ominous. It is helpful to keep in mind that the study by Rantakömi and colleagues followed a total of 2609 men with no history of stroke, of whom only 66 died from stroke.

Consider the following scenario. Let us say that 1 person in a group of 1,000 people who consumed no alcohol died from stroke. Let us also say that 3 people in a group of 1,000 people who consumed alcohol more than 2.5 times per week died from stroke. Given this, the RR would be: (3/1,000) / (1/1,000) = 3.

One could say, based on this, that: “Consuming alcohol more than 2.5 times per week increases the risk of dying from stroke by 200%”. Based on the RR, this is technically correct. It is rather misleading nevertheless.

If you think that increasing the sample size may help ameliorate the problem, think again. The RR would be the same if it were 3 people versus 1 person in 1,000,000 (one million). With these numbers, the RR would be even less credible, in my view.
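To see both points at once, here is a minimal sketch in Python that computes the RR alongside the absolute risk difference, using the hypothetical numbers from the scenario above.

```python
def relative_risk(events_exp, n_exp, events_unexp, n_unexp):
    # RR = P(event | exposed) / P(event | not exposed)
    return (events_exp / n_exp) / (events_unexp / n_unexp)

def risk_difference(events_exp, n_exp, events_unexp, n_unexp):
    return events_exp / n_exp - events_unexp / n_unexp

# Scenario from the text: 3 vs. 1 deaths per 1,000 people.
print(relative_risk(3, 1000, 1, 1000))    # 3.0, i.e. a "200% increase"
print(risk_difference(3, 1000, 1, 1000))  # 0.002, i.e. 2 extra deaths per 1,000

# Same RR with one million people per group; the absolute difference shrinks.
print(relative_risk(3, 1_000_000, 1, 1_000_000))    # still 3.0
print(risk_difference(3, 1_000_000, 1, 1_000_000))  # 0.000002, 2 per 1,000,000
```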

This makes the findings by Rantakömi and colleagues look a lot less ominous, don’t you think? This post is not really about the study by Rantakömi and colleagues. It is about the following question, which is in the title of this post: What is “relative risk” (RR)?

Quite frankly, given what one sees in RR-based studies, the answer is arguably not far from this:

RR is a ratio used in statistical analysis that makes minute effects look enormous; the effects in question would not normally be noticed by anyone in real life, and may be due to chance after all.

The reason I say that the effects “may be due to chance after all” is that when effects are such that 1 event in 1,000 would make a big difference, a researcher would have to control for practically everything in order to rule out confounders.

If one single individual with a genetic predisposition toward death from stroke falls into the group that consumes more alcohol, falling in that group entirely by chance (or due to group allocation bias), the RR-based results would be seriously distorted.

This highlights one main problem with epidemiological studies in general, where RR is a favorite ratio to be reported: the effects they refer to tend to be tiny.

One way to put results in context and present them more “honestly” would be to provide more information to readers, such as graphs showing data points and unstandardized scales, like the one below. This graph is from a previous post on latitude and cancer rates in the USA (), and has been generated with the software WarpPLS ().



This graph clearly shows that, while there seems to be an association between latitude and cancer rates in the USA, the total variation in cancer rates in the sample is only of around 3 in 1,000. This graph also shows outliers (e.g., Alaska), which call for additional explanations.

As for the issue of alcohol consumption frequency and mortality, I leave you with the results of a 2008 study by Breslow and Graubard, with more citations and published in a more targeted journal ():

“Average volume obscured effects of quantity alone and frequency alone, particularly for cardiovascular disease in men where quantity and frequency trended in opposite directions.”

In other words, quantity and frequency appear to matter in their own right, in ways that average volume (quantity multiplied by frequency) can mask. We can state this even more simply: drinking two bottles of whiskey in one sitting, but only once every two weeks, is not going to be good for you, even if your average volume looks moderate.

In the end, providing more information to readers so that they can place the results in context is a matter of scientific honesty.

Monday, June 30, 2014

A case of a very large salivary stone


Salivary stones are the most common type of salivary gland disease. Having said that, they are very rare – fewer than 1 in 200 people will develop a symptomatic salivary stone. Usually they occur on one side of the mouth only. They seem to be more common in men than in women. Most of the evidence suggests that they are not strongly correlated with kidney stones, although some factors (e.g., dehydration) can increase the risk of both.

Singh and Singh () discuss a case of a 55-year-old man who went to the Udaipur Dental Clinic with mild fever, pain, and swelling in the floor of the mouth. External examination, visually and through palpation, found no swelling or abnormal mass. The man’s oral hygiene was rather poor. The figures below show the extracted salivary stone, the stone perforating the base of the mouth prior to extraction, and an X-ray image of the stone.





I am not a big fan of X-ray tests in dental clinics, as they are usually done to convince patients to have dental decay treated in the conventional way – drilling and filling.

Almost ten years ago, based on X-ray tests, I was told that I needed to treat some cavities urgently. I refused and instead completely changed my diet. Those cavities either reversed or never progressed. As the years passed, my dentist eventually became convinced that I had done the right thing, but told me that my case was very rare; unique in fact. Well, I know of a few cases like mine already. I believe that the main factors in my case were the elimination of unnatural foods (e.g., wheat-based foods), and consumption of a lot of raw-milk cheese.

However, as the case described here suggests, an X-ray test may be useful when a salivary stone is suspected.