This post is in response to an inquiry by Ivor (sorry for the delayed response). It refers to a recent study by Rantakömi and colleagues on the effect of alcohol consumption frequency on mortality from stroke (). The study followed men who consumed alcohol at different frequencies, including no consumption at all, over a period of a little more than 20 years.

The study purportedly controlled for systolic blood pressure, smoking, body mass index, diabetes, socioeconomic status, and total amount of alcohol consumption. That is, its results are presented as holding regardless of those factors.

The main results were reported in terms of “relative risk” (RR) ratios. Here they are, quoted from the abstract:

“0.71 (95% CI, 0.30–1.68; P = 0.437) for men with alcohol consumption <0.5 times per week and 1.16 (95% CI, 0.54–2.50; P = 0.704) among men who consumed alcohol 0.5–2.5 times per week. Among men who consumed alcohol >2.5 times per week compared with nondrinkers, RR was 3.03 (95% CI, 1.19–7.72; P = 0.020).”

Note the P values reported within parentheses. A P value is the probability of obtaining a result at least as extreme as the one observed if chance alone were at work, that is, if there were no actual effect. By convention, P values equal to or lower than 0.05 are considered statistically significant. In consequence, P values greater than 0.05 are seen as referring to effects that cannot be unequivocally considered real.
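The 0.05 convention amounts to a simple filter over the reported P values. A minimal sketch in Python, using the numbers quoted from the abstract above:

```python
# P values quoted in the abstract, keyed by consumption-frequency group.
p_values = {
    "<0.5 times/week": 0.437,
    "0.5-2.5 times/week": 0.704,
    ">2.5 times/week": 0.020,
}

ALPHA = 0.05  # conventional significance threshold

# Keep only the groups whose P value meets the threshold.
significant = [group for group, p in p_values.items() if p <= ALPHA]
print(significant)  # only the '>2.5 times/week' result passes
```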

This means that, of the results reported, only one seems to be due to a real effect, and that is the one that: “Among men who consumed alcohol >2.5 times per week compared with nondrinkers, RR was 3.03 …”

Why the authors report the statistically non-significant results as if they were noteworthy is unclear to me.

Before we go any further, let us look at what “relative risk” (RR) means. RR is given by the following ratio:

(Probability of an event when exposed) / (Probability of an event when not exposed)

In the study by Rantakömi and colleagues, the event is death from stroke. The exposure refers to alcohol consumption at a certain level, compared to no alcohol consumption (no exposure).
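As a sketch, the ratio above can be written as a small Python function (the argument names are mine, not the study's):

```python
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Ratio of the event probability in the exposed group to the
    event probability in the unexposed group."""
    p_exposed = events_exposed / n_exposed
    p_unexposed = events_unexposed / n_unexposed
    return p_exposed / p_unexposed
```

For example, `relative_risk(3, 1000, 1, 1000)` returns approximately 3.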

Now, let us go back to the result regarding consumption of alcohol more than 2.5 times per week. That result sounds ominous. It is helpful to keep in mind that the study by Rantakömi and colleagues followed a total of 2609 men with no history of stroke, of whom only 66 died from stroke.

Consider the following scenario. Let us say that 1 person in a group of 1,000 people who consumed no alcohol died from stroke. Let us also say that 3 people in a group of 1,000 people who consumed alcohol more than 2.5 times per week died from stroke. Given this, the RR would be: (3/1,000) / (1/1,000) = 3.

One could say, based on this, that: “Consuming alcohol more than 2.5 times per week increases the risk of dying from stroke by 200%”. Based on the RR, this is technically correct. It is rather misleading nevertheless.

If you think that increasing sample size may help ameliorate the problem, think again. The RR would be the same if it were 3 deaths versus 1 death in 1,000,000 (one million). With these numbers, the RR would be even less credible, in my view.
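Both points can be checked in a few lines of Python, using the hypothetical counts from the scenario above:

```python
# RR for the hypothetical scenario: 3 deaths vs. 1 death per 1,000.
rr_small = (3 / 1_000) / (1 / 1_000)

# Same counts per 1,000,000: the absolute risks shrink a thousandfold,
# yet the relative risk stays the same.
rr_large = (3 / 1_000_000) / (1 / 1_000_000)

# The "200% increase in risk" headline number is just (RR - 1) x 100.
percent_increase = (rr_small - 1) * 100

print(rr_small, rr_large, percent_increase)
```

The RR is completely insensitive to the denominators, which is exactly why it can make a 2-in-1,000,000 difference sound dramatic.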

This makes the findings by Rantakömi and colleagues look a lot less ominous, don’t you think? This post is not really about the study by Rantakömi and colleagues. It is about the following question, which is in the title of this post: What is “relative risk” (RR)?

Quite frankly, given what one sees in RR-based studies, the answer is arguably not far from this:

**RR is a ratio used in statistical analysis that makes minute effects look enormous; the effects in question would not normally be noticed by anyone in real life, and may be due to chance after all.**

The reason I say that the effects “may be due to chance after all” is that when effects are such that 1 event in 1,000 would make a big difference, a researcher would have to control for practically everything in order to rule out confounders.

If a single individual with a genetic predisposition toward death from stroke ends up in the group that consumes more alcohol, entirely by chance (or due to group allocation bias), the RR-based results will be seriously distorted.
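To see how fragile the ratio is when events are this rare, consider a toy computation; all counts below are invented for illustration, not taken from the study:

```python
n_per_group = 1_000  # invented group size

deaths_unexposed = 1  # invented count in the nondrinker group
deaths_exposed = 2    # invented count in the heavier-drinking group

rr_before = (deaths_exposed / n_per_group) / (deaths_unexposed / n_per_group)

# Suppose one person with a genetic predisposition toward stroke lands,
# purely by chance, in the heavier-drinking group and dies there.
rr_after = ((deaths_exposed + 1) / n_per_group) / (deaths_unexposed / n_per_group)

print(rr_before, rr_after)
```

A single chance allocation moves the RR from 2 to 3, a 50 percent jump in the headline number.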

This highlights one main problem with epidemiological studies in general, where RR is a favorite ratio to be reported. The problem is that epidemiological studies in general refer to effects that are tiny.

One way to put results in context and present them more “honestly” would be to provide more information to readers, such as graphs showing data points and unstandardized scales, like the one below. This graph is from a previous post on latitude and cancer rates in the USA (), and has been generated with the software WarpPLS ().

This graph clearly shows that, while there seems to be an association between latitude and cancer rates in the USA, the total variation in cancer rates in the sample is only around 3 in 1,000. This graph also shows outliers (e.g., Alaska), which call for additional explanations.

As for the issue of alcohol consumption frequency and mortality, I leave you with the results of a 2008 study by Breslow and Graubard, with more citations and published in a more targeted journal ():

“Average volume obscured effects of quantity alone and frequency alone, particularly for cardiovascular disease in men where quantity and frequency trended in opposite directions.”

In other words, quantity and frequency appear to have distinct effects, which are obscured when the two are combined into average volume (quantity multiplied by frequency). We can state this even more simply: drinking two bottles of whiskey in one sitting, but only once every two weeks, is not going to be good for you, even if the average volume looks moderate.
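A toy example, with invented numbers, shows why volume alone can obscure the pattern of drinking: two very different patterns can have exactly the same weekly volume.

```python
# Two invented drinking patterns with the same weekly volume
# (quantity x frequency) but very different behavior.
binge = {"drinks_per_occasion": 14, "occasions_per_week": 1}
spread_out = {"drinks_per_occasion": 2, "occasions_per_week": 7}

def weekly_volume(pattern):
    return pattern["drinks_per_occasion"] * pattern["occasions_per_week"]

print(weekly_volume(binge), weekly_volume(spread_out))  # both 14
```

A study that records only volume cannot tell these two patterns apart.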

In the end, providing more information to readers so that they can place the results in context is a matter of scientific honesty.