by Carl V Phillips
Sigh. We are supposed to be the honest and scientific ones in the tobacco wars. But we won't be if we are not, well, scientific. A case in point: the criticisms of the recent paper with Glantz's name on it, which has been erroneously said to suggest that vaping doubles the risk of heart attack.
Incidentally, the meaningless statistic in the paper is an RR of 1.8, which is not double. Also, when the paper was originally written as a student class project (not by science students, mind you, but by medical students), that statistic was 1.4. That was when Glantz heard about it, managed to get the kids to put his name on the paper, and taught them how to better cook their numbers. That "contribution" has him being called the lead author.
The paper is junk science. So are most of the criticisms of it. If only someone with expertise in these methods had written a critique of it that people could look to. Oh, wait, here’s one in The Daily Vaper from February. That was based on a poster version of the paper, but as I noted in the article, “It has not yet appeared in a peer-reviewed journal, but it will, and the peer-review process will do nothing to correct the errors noted here.” I wish I could claim this was an impressive prediction, but it is about the same as predicting in February that the sun will rise in August.
You can go read that if you just want a quick criticism of the paper, and also look at the criticism on this page of some hilarious innumeracy that Glantz piled on top of it. In the present post I am mostly criticizing the bad criticisms, though at the end I go into more depth about the flaws in the paper.
About half the critiques I have seen say something along the lines of “it was a cross-sectional study, and therefore it is impossible to know whether the heart attacks occurred before or after someone started vaping.” No. No no no no no. This is ludicrous.
Yes, the data was from a cross-sectional survey (the 2014 and 2016 waves of NHIS, mysteriously skipping 2015). And, yes, we do not know the relative timing (as discussed below). But "therefore it is impossible to know" (or other words along those lines)? Come on. A cross-sectional survey is perfectly capable of measuring the order of past events. Almost every single cross-sectional survey gives us a pretty good measure of, for example, whether someone's political views were formed before or after the end of the Cold War. Wait! What kind of wizardry is this? How can such a thing be known if we do not have a cohort to follow? Oh, yeah, we ask them their age or what year they were born. Easy peasy.
Almost every statistic you see about average age of first doing something — a measure of the order in which events occurred (e.g., that currently more Americans become smokers after turning 18 than before, but most extant smokers started before they were 18) — is based on cross-sectional surveys that ask retrospective questions. It is perfectly easy to do a survey that asks heart attack victims the order in which events occurred. Indeed, any competent survey designed to investigate the relationship in question would ask current age, age of smoking initiation and quitting, age of vaping initiation and quitting, and age at the time of heart attack(s), ideally drilling down to whether smoking cessation was just before or just after the heart attack if they occurred the same year. We would then know a lot more than the mere order. But NHIS does not do that because, as I noted in the DV article, it is a mile wide and an inch deep. It is good for a lot of things, but useless for investigating this question. It can be used, as it was here, for a cute classroom exercise to show you learned how to run (not understand, but run) the statistical software from class. But only an idiot would think this paltry data was useful for estimating the effect.
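To make this concrete, here is a minimal sketch of how retrospective age questions pin down event order in a one-time survey. The field names here are hypothetical, invented for the example; they are not actual NHIS variables:

```python
# Minimal sketch: establishing event order from retrospective questions in a
# one-time (cross-sectional) survey. Field names are hypothetical, not actual
# NHIS variables.

def heart_attack_before_vaping(respondent):
    """True if the first heart attack preceded vaping initiation, False if it
    followed, None if the order cannot be determined from the answers."""
    mi_age = respondent.get("age_first_heart_attack")
    vape_age = respondent.get("age_started_vaping")
    if mi_age is None or vape_age is None:
        return None  # never had the event, or did not answer
    if mi_age == vape_age:
        return None  # same year; a finer-grained follow-up question is needed
    return mi_age < vape_age

# A respondent who had a heart attack at 50 and took up vaping at 55:
print(heart_attack_before_vaping(
    {"age_first_heart_attack": 50, "age_started_vaping": 55}))  # True
```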
(A variation on these “therefore it is impossible” claims is the assertion that because it is a cross-sectional study, it can only show correlation and not causation. I am so sick of debunking that particular bit of epistemic nonsense that I am not even going to bother with it here.)
So, we do not know the order of events. We can be confident that almost all the smokers or former smokers who had heart attacks smoked before that event. We do not know whether subjects quit smoking and/or started vaping before their heart attacks. Given that vaping was a relatively new thing at the time of the surveys, whereas heart attacks were not, it seems likely that most of the heart attacks among vapers occurred before they started vaping. This creates a lot of noise in the data.
A second, and seemingly more common, erroneous criticism of the analysis is that this noise has a predictable direction: "Smokers had heart attacks and then, desperate to quit smoking following that event, switched to vaping, thereby creating the association." Again, no no no. Heart attacks do cause some smokers to become former smokers, but there is little reason to believe they are much more likely than other former smokers to have switched to vaping. Some people will have heart attacks and quit smoking unaided or using some other method. Indeed, I am pretty sure (not going to look it up, though, because it is not crucial) that most living Americans who have ever had a heart attack experienced that event before vaping became a thing. So if they quit smoking as a result of the event, they did not switch to vaping. Also, it seems plausible that the focusing event of a heart attack makes unaided quitting more likely than average, as well as making "getting completely clean" more appealing.
Of course, an analysis of whether behavior X causes event Y should not be based on data that includes many Y that occurred before X started. That much is obviously true. NHIS data is not even a little bit useful here, which is the major problem. There is so much noise from the heart attacks that happened before the vaping that the association in the data is utterly meaningless for assessing causation.
But there is no good reason to assume that this noise biases the result in a particular direction. If asked to guess, a priori, the direction of the bias it creates, I probably would have guessed the other way (less vaping among those who had heart attacks compared to other former smokers). The main reason to believe that the overall bias went in a particular direction is that the result shows an association that is not plausibly causal. That tells us the direction of the net bias, but it is not the same as having an a priori reason to believe this particular bit of noise would push it that way. When we see a tracking poll whose results are substantially out of line with previous results, it is reasonable to guess that random sampling error pushed the result in a particular direction. But we conclude that only from the result; there was no a priori reason to predict which way the random sampling error would go.
Moreover, we do not have any reason to believe that the net bias was caused by this particular error, because it has a rather more obvious source (see below).
Sometimes we do have an a priori reason to predict the direction of bias caused by similar flaws in the data, as with the previous Glantz paper with an immortal person-time error (explained here, with a link back to my critique of the paper). If the medical students had engaged in a similar abuse of NHIS data to compare the risks of heart attack for current versus former smoking, then the direction of bias would be obvious: Heart attacks cause people to become former smokers, which would make former smoking look worse than it is compared to current smoking. I suspect that people who are making the error of assuming the direction of bias from the “Y before X” noise are invoking some vague intuition of this observation. They then mistranslate it into thinking that former smokers who had a heart attack are more likely to be vapers than other former smokers.
This brings up a serious flaw in the analysis that I did not have space to go into in my DV article: The analysis is not just of former smokers who vape, but includes people who both smoke and vape, as well as the small (though surprisingly large) number of never-smokers who vape. If vaping does cause heart attacks, it would almost certainly do so to a different degree in each of these three groups. For reasons I explored in the previous post, different combinations of behaviors have different effects on the risk of an outcome. Vaping probably is protective against heart attack in current smokers, because they smoke less than they otherwise would, on average. If a smoker vapes in addition to how much she would have smoked anyway, the increased risk from adding vaping to the smoking is almost certainly less than the (hypothesized) increased risk from vaping alone. Whatever it is about vaping that increases the risk (again, hypothetically), the smoking is already doing that. Thus any effect from adding vaping to smoking would be small compared to the effect of vaping versus not using either product. Most likely the effect on current smokers would be nonexistent or even protective.
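For the numerately inclined, here is a toy version of that reasoning. Every number is invented purely for illustration; none is an estimate from any study:

```python
# Toy calculation; all numbers invented purely for illustration.
baseline = 0.002       # annual heart attack risk, never-smoker and never-vaper
rr_smoking = 3.0       # hypothetical risk ratio for smoking
rr_vaping_alone = 1.5  # hypothesized risk ratio for vaping in a never-smoker

risk_smoker = baseline * rr_smoking      # 0.006
risk_vaper = baseline * rr_vaping_alone  # 0.003

# If smoking is already driving the same causal pathway, adding vaping on top
# of unchanged smoking plausibly adds little or nothing:
risk_dual_use = risk_smoker * 1.05       # hypothetical: a ~5% bump at most

print(risk_vaper / baseline)        # 1.5   -- vaping alone vs. no use
print(risk_dual_use / risk_smoker)  # ~1.05 -- adding vaping to ongoing smoking
```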
Indeed, this is so predictable that if you did a proper study of this topic (using data about heart attacks among vapers, rather than vaping among people who sometime in the past had a heart attack; also with a decent measure of smoking intensity — see below), and your results showed a substantial risk increase from vaping among current smokers, it would be a reason to dismiss whatever result appeared for former smokers. This is especially true if the estimated effect was substantial in comparison to the estimate for former- or never-smokers. If you stopped to think, you would realize that your instrument produced an implausible result, and thus it would be fairly stupid to believe it got everything else right. This is a key part of scientific hypothesis testing. Of course, such real science is not part of the public health research methodology. Nor is stopping to think.
It is a safe bet that the students who did this analysis understand none of that, having never studied how to do science and lacking subject-matter expertise. Glantz and the reviewers and editors of American Journal of Preventive Medicine neither understand nor care about using fatally flawed methods. So the analysis just “controls for” current and former smoking status as a covariate rather than separating out the different smoking groups as it clearly should. This embeds the unstated — and obviously false — assumption that the effect of vaping is the same for current, former, and never smokers. Indeed, because “the same” in this case means the same multiplicative effect, it actually assumes that the effect for current smokers is higher than that for former smokers (because their baseline risk is higher and this larger risk is being multiplied by the same factor).
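To see what that shared multiplier assumes, apply the same ratio to two hypothetical baselines (again, numbers invented purely for illustration):

```python
# Hypothetical baselines, invented for illustration: prevalence of heart
# attack history among current vs. former smokers in a sample.
baseline_current = 0.10
baseline_former = 0.04
rr = 1.8  # a single shared multiplier, as the model assumes

# The same ratio applied to both groups implies very different absolute effects:
print(round(baseline_current * rr - baseline_current, 3))  # 0.08  -- 8 extra per 100
print(round(baseline_former * rr - baseline_former, 3))    # 0.032 -- ~3 extra per 100
# A bigger effect where the baseline is bigger: an assumption baked into the
# model, not a finding of the analysis.
```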
Though they did not stratify the analysis properly, it is fairly apparent that their results fail that hypothesis test: the estimate is driven by the majority of vapers in the sample who are current smokers, so that group must have had a substantially greater history of heart attacks, exactly the implausible result just described.
There is a good a priori reason to expect this upward bias, as I noted in the DV article, but it is not the reason voiced in most of the critiques. It is because historically vapers had smoked longer and more than the average ever-smoker. This is changing as vaping becomes a typical method for quitting smoking, or a normal way to cut down to having just a couple of real cigarettes per day as a treat, rather than a weird desperate attempt to quit smoking after every other method has failed. Eventually the former-smoking vaper population might look just like the average former-smoker population, with lots of people who smoked lightly for a few years and quit at age 25, and so on. But in the data that was used, the vapers undoubtedly smoked more than average and so were more likely to have a heart attack (before or after they started vaping).
Controlling for smoking using only "current, former, never" is never adequate if the exposure of interest is associated with smoking history and smoking causes the outcome, both of which are obviously true here. (If there were no such associations, there would be no reason to control for smoking at all, of course.) Thus basically any time you see those variables in a model, you can be pretty sure there is some uncontrolled confounding due to unmeasured smoking intensity. In this case, you can be pretty sure that its effect is large and that it biases the association upward.
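A quick simulation sketch makes the point. Every parameter is invented for illustration, and the true effect of vaping is set to exactly zero:

```python
# Simulation sketch of residual confounding from unmeasured smoking intensity.
# All parameters are invented for illustration. Vaping has NO effect on heart
# attack risk here; only (unmeasured) pack-years matter. Yet within the
# "former smoker" category, vapers show elevated risk because their smoking
# histories are heavier on average.
import random

random.seed(0)

def simulate_former_smoker():
    vaper = random.random() < 0.2
    # Hypothetical: vapers smoked more, reflecting the historical population.
    pack_years = max(random.gauss(30 if vaper else 15, 8), 0)
    p_mi = min(0.02 + 0.004 * pack_years, 1.0)  # risk from pack-years only
    return vaper, random.random() < p_mi

n = 200_000
counts = {(v, m): 0 for v in (False, True) for m in (False, True)}
for _ in range(n):
    counts[simulate_former_smoker()] += 1

risk_vaper = counts[True, True] / (counts[True, True] + counts[True, False])
risk_other = counts[False, True] / (counts[False, True] + counts[False, False])
print(risk_vaper / risk_other)  # well above 1, despite a true effect of zero
```

Stratify on (or control for) pack-years in that toy world and the association vanishes; leave it unmeasured, as this analysis effectively did, and you get a "risk ratio" that is pure confounding.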
In short, the results are clearly invalid. There are slam-dunk criticisms that make this clear. So let’s try to stick to those rather than offering criticisms that are as bad as the analysis itself. Ok?