by Carl V Phillips
I will interrupt my series on the failures of peer review to look at a great example of the failures of peer review, a new broadside (I hesitate to call it a study) from the CDC that appears in Nicotine and Tobacco Research, an alarmist piece about kids’ reported use of e-cigarettes. Here is the official abstract at the paywalled journal page. And here is a bootleg copy of the full manuscript (the US government does not let you hide your work behind paywalls if it comes from taxpayer-funded grants, and so I am not going to let them do it when we are paying for it directly).
I will start with their main conclusion:
Enhanced prevention efforts for youth are important for all forms of tobacco, including e-cigarettes.
Just in case it was not clear that this paper is a thinly veiled political broadside, they pretty much announce it via a conclusion that does not follow from the content of the analysis, which is entirely about usage statistics and “intentions” (of course, I cover this below). There is nothing whatsoever in the analysis about prevention efforts. There is no basis for implying they accomplish anything, no basis for claiming they do more good than harm, that they are cost-effective, etc. The statistics presented (if interpreted honestly) could provide one input into such an analysis (though just of the “including e-cigarettes” bit and obviously not the rest of the sentence), but clearly do not constitute such an analysis on their own. And, yet, there it is, at the end of the abstract (quoted), as well as in the paper and, of course, even more aggressively paraphrased in their breathless press release.
Funny how the peer reviewers and editors did not notice that the main conclusion did not at all follow from the analysis, and most of the statement was completely unrelated. Nor did they seem to care that more than half of the text was devoted not to presenting the methods or results of the study, but to random political tangents and editorials. Imagine what would happen if someone reported results of a study that showed that e-cigarettes were barely of interest to nonsmokers and concluded, “Aggressive prevention efforts for youth are apparently not all that important for e-cigarettes.” You can bet that a reviewer would demand it be removed (even absent the completely unrelated claims about other products), and it is entirely possible the entire paper would be rejected simply because that sentence appeared in the draft. This just goes to show that an anti-tobacco “public health” journal will publish any political assertion, so long as it is politically correct.
But the funny thing is, that negative conclusion could actually follow from such an analysis, unlike an affirmative conclusion that something should be done. If the data show there is no problem then it does not matter whether the “solution” is cost-effective and such — it is not called for in any case.
The funnier thing is that this is exactly what the CDC data actually show.
Recall that the last time CDC wrote about this data, from the National Youth Tobacco Survey (NYTS), they looked only at the total number of kids who had tried (which they called and continue to misleadingly call “used”) e-cigarettes. This was roundly criticized in this blog and elsewhere (see in particular, Rodu), for reasons that included the fact that most of the e-cigarette trialers were already smokers who were perhaps trying to practice THR. So this time the CDC (and FDA) authors are focusing on those who never smoked. They would get credit for this refinement if they had also stopped lying about what the results really showed.
So what did they find? They reported that according to the 2013 NYTS, 263,000 never-smoking American youth, grades 6-12, had ever taken one puff of an e-cigarette. The lies start here. They describe this as “used e-cigarettes” even though no rational person would interpret “use” to mean “tried one puff ever”. The press release compounds this lie with the headline:
More than a quarter-million youth who had never smoked a cigarette used e-cigarettes in 2013
Um, no. Did you not read your own methods section? The results showed that in 2013, this many kids had ever tried an e-cigarette, not that they tried (let alone used) e-cigarettes in 2013. I never realized that you had to be an expert in epidemiology or econometrics (which these people clearly are not) to understand the concept of “ever”.
It is worth noting that while they drew on multiple years of the NYTS, their results are primarily about the 2013 survey, which is suspiciously not yet available to the public, unlike past years. To tie this to my current series about peer review, that means the reviewers could not have checked the key numbers even if they had the urge to actually review the analysis they were “peer reviewing”.
Further to the junk nature of the claim are all those significant figures. The accuracy of this survey is such that they would be lucky if the point estimate was within 20% of the truth. Thus the “more than” in the press release has about a 50-50 chance of being true. They reported three digits of precision to mislead the reader into thinking they know more than they really do (and, of course, the journal let them do so).
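For anyone who wants to check that arithmetic, here is a sketch. Note that the 20% figure is my rough characterization of the survey’s accuracy, not a published margin of error, so treat the band as illustrative only:

```python
# Rough illustration: if the point estimate is only accurate to within
# ~20% of the truth (an assumption for illustration, not a published
# standard error), the plausible range easily straddles 250,000.
point_estimate = 263_000
low = point_estimate * 0.8   # -20%
high = point_estimate * 1.2  # +20%

print(f"plausible range: {low:,.0f} to {high:,.0f}")
print("'more than a quarter-million' certain?", low > 250_000)
```

Since 250,000 sits comfortably inside that band, the headline’s “more than” is a coin flip, not a finding.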
More important, how many nonsmoking kids are there in this cohort? About 25 million. Funny how the fact that only about 1% of them had ever tried an e-cigarette did not make it into the abstract or the press release, let alone the much smaller fraction of 1% who had taken a puff in the 30 days before the survey and thus might(!) actually be e-cigarette users. It is also worth noting that this is far lower than the roughly 5% of the “youth” in the population who are of legal age to buy cigarettes, cannabis where it is legal, and e-cigarettes where they are age-restricted.
Apart from these misleading totals, their most important claim is that e-cigarettes cause never-smokers to roughly double their intention to smoke in the future. They do not use that exact phrase, but there is no doubt that is the message they are trying to engineer. Their claims about future smoking “intentions” (that is really the word they use) being higher among never-smokers who tried e-cigarettes are based entirely on responses to these two survey questions:
“Do you think you will smoke a cigarette in the next year?” and “If one of your best friends were to offer you a cigarette, would you smoke it?”
If someone merely answered “probably not” (or, of course, “probably yes” or “definitely yes”) to either of these, rather than “definitely not” to both, he was considered to have intentions to smoke. Intentions!
If someone were asked “will you commit murder in the next year?” or “willingly have sex with someone 40 years your senior” or “kill your dog” or “overdose on heroin”, she could not honestly say there is absolutely zero chance of that, no matter how unappealing the idea is to her. Any kid who is sufficiently familiar with reality, and not just dutifully echoing the “just say no” propaganda they are fed, would avoid responding “definitely not” to these questions or most any other questions about what the future holds. Geez, I would like to think that our high-school students are reality-based enough to know that there is always some nonzero possibility of such things occurring, making that “definitely” declaration almost always wrong.
Given that “definitely not” is not an answer that would be given by a thoughtful teenager, it obviously makes sense to do an alternative cut at the data, combining “probably not” with “definitely not”. That would be reported by any author who was trying to present an honest picture of the data, and demanded by any sensible reviewer. It would be trivial to do. You probably do not have to go review the manuscript to know whether it was actually done.
Perhaps if asked by someone “will you commit genocide in the next year” the answer might legitimately be “definitely not”, because he would kill himself before doing so, plus there is no physical way he could pull it off (though publishing anti-THR propaganda would be a valiant effort at it). But smoking a puff on a cigarette is simply not the big deal that people like those at CDC think it is. I would guess that a substantial portion of kids are sufficiently self-aware that they could manage the thought process, “I don’t want to ever smoke, never ever ever!, but if circumstances were such that taking one puff of an offered cigarette would improve my relationship with my circle of friends, I would ‘probably yes’ or even ‘definitely yes’ just do it.”
In sum, this is an utterly ludicrous measure of “intention”. “Probably not” counts as intention. “I really don’t want to, but under imaginable circumstances I would take a puff” counts as intention.
Anything built on that foundation is junk, but just to carry this through, they report that 44% of the e-cigarette trialers and 22% of the never-trialers “intended” to smoke. (Actually they reported it to the next decimal place, but I refuse to mimic that error.) This doubling seems impressive until you realize that someone who would try an e-cigarette is obviously different from someone who would not in many ways (i.e., there is massive confounding). Since the vast majority of e-cigarette trialers (and presumably actual users), both kids and adults, are smokers or ex-smokers, someone who has access to e-cigarettes (because friends have them) is much more likely to be hanging around with smokers. This would make them rather less likely to absurdly insist there is no chance they would take a puff on a cigarette. Other obvious differences between these populations include a willingness/desire to do something illicit that the CDC would not approve of (and perhaps more important, their parents and teachers) and simply having much of an unsupervised social life. It even seems like there would be a correlation between being willing to take an approximately harmless puff on an e-cigarette and being wise enough to never answer “definitely not” about something that could conceivably happen.
So, given the obvious confounding problems, CDC dutifully did a multivariate analysis, including variables for sex, middle-vs-high school (but, very strangely, not just age), race, smoking in their household, and exposure to pro-tobacco advertising. Um, yeah. I am sure that did a great job of adjusting for those fundamental differences I noted. Obviously this was a total fail at attempting to control for confounding, which CDC dutifully reported in their discussion section, as the peer review process would obviously demand of them. Haha, just kidding. Despite the discussion section being twice as long as the results, no form of the word “confounding”, nor any other acknowledgment of the problem (not even to explain why they would do the multivariate analysis in the first place!), appears there.
But even after adjusting for these relatively useless covariates, the difference dropped dramatically. How dramatically? It is hard to tell because they did not report the right statistics. They reported an adjusted figure of 1.7 — less than 2 but still close — but this is an exaggeration. That 1.7 is an odds ratio (OR). If you understand statistics (which apparently these authors do not — nor the wonderful peer reviewers who signed off on the article) you will immediately recognize this as wrong. To try to very briefly explain to everyone else (maybe the CDC people will read this and learn something):
ORs are a convenient and correct measure, for technical reasons that I will not attempt to cover, when you are analyzing events with person-time as the denominator (i.e., rates). So you see them in epidemiology for comparing measures like heart attack rates. But they are not right for comparing proportions. It is easy to see with an example. If a proportion (such as how many “intend” to smoke) is 50% for one group and 25% for the referent group, then obviously we would properly think of the first as being 2.0 times the second. But the ratio of the odds (which are 1:1 and 1:3, respectively) would give you 3.0, which is obviously not the natural or right way to think about the comparison. It would get even worse if the numbers were 66% and 33% — still properly thought of as double, but the OR=4. So, since that 25%-vs-50% comparison is close to the actual numbers, and since an OR of 3 would really mean 2, we can back-of-the-envelope that the 1.7 really means less than 1.5.
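That back-of-the-envelope can be done exactly with standard algebra, using nothing from the paper beyond the reported OR and the 22% baseline “intention” proportion among never-trialers (the function name here is mine, purely for illustration):

```python
# Convert an odds ratio to the ratio of proportions, given the
# referent group's proportion. Standard algebra, nothing specific
# to this paper's (unavailable) microdata.
def or_to_rr(odds_ratio, baseline_proportion):
    """Ratio of proportions implied by an odds ratio at a given baseline."""
    p0 = baseline_proportion
    odds0 = p0 / (1 - p0)            # referent group's odds
    odds1 = odds_ratio * odds0       # exposed group's odds
    p1 = odds1 / (1 + odds1)         # back to a proportion
    return p1 / p0

# The 50%-vs-25% toy example: a ratio of exactly 2.0, even though the OR is 3.0
print(round(or_to_rr(3.0, 0.25), 2))

# The paper's adjusted OR of 1.7, at the ~22% baseline among never-trialers:
# the implied ratio of proportions comes out below 1.5
print(round(or_to_rr(1.7, 0.22), 2))
```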
And that is how low the ratio is without actually controlling for the difference between the populations. If they had actually been able to control for the confounding, rather than just throwing in variables that have no obvious relationship to the real causes of confounding, it is a safe bet that the entire difference would have disappeared. That is, CDC is trying to claim that using e-cigarettes is causing people to “intend” to smoke, whereas the data support the claim that the association is actually lower than we would expect from the obvious confounding.
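To see how composition alone can manufacture such an association, consider a deliberately stylized example. Every number here is invented for illustration; it is not the paper’s data. Within each social stratum, trying an e-cigarette has zero effect on “intention”, but trialers cluster in the stratum where intention is common:

```python
# Invented numbers, purely illustrative: two social strata in which
# trying an e-cigarette has ZERO effect on "intention" within each
# stratum, but trialers cluster in the high-intention stratum.
# Pooling the strata manufactures a "doubling" out of nothing.

# (stratum, intention rate, trialers in stratum, non-trialers in stratum)
strata = [
    ("hangs out with smokers", 0.40, 80, 20),
    ("does not",               0.10, 20, 80),
]

trialer_intenders = sum(rate * n_tried for _, rate, n_tried, _ in strata)
trialers = sum(n_tried for _, _, n_tried, _ in strata)
nontrialer_intenders = sum(rate * n_not for _, rate, _, n_not in strata)
nontrialers = sum(n_not for _, _, _, n_not in strata)

p_tried = trialer_intenders / trialers      # pooled proportion, trialers
p_not = nontrialer_intenders / nontrialers  # pooled proportion, never-trialers

print(f"pooled: {p_tried:.0%} vs {p_not:.0%}, ratio {p_tried / p_not:.2f}")
```

The pooled comparison (34% vs 16%, a ratio above 2) looks just like the paper’s headline result, despite the effect of e-cigarette trial being exactly zero by construction.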
I am guessing that the authors misused the OR not to intentionally increase the number (though it served that purpose) but because they did not know how to do the analysis other than by running a logistic regression and reading the ORs off of the output, and so could not have done it right even if they wanted to. It is so reassuring that the people we depend on to protect us from Ebola cannot figure out first-year statistics.
Similarly, the authors also try to make a big deal about the time trend, just as CDC did last year based on earlier data: 79,000 (no, not 80,000 — exactly 79,000!) in 2011, increasing to 263,000 in 2013. As I pointed out last year, if you are measuring ever-tried and looking at mostly the same population (it is a seven-year age range, so there is only one year of cohort turnover each year), of course there is going to be an increase. Indeed, positing that e-cigarettes first saw much use in 2011 (which is not far from the truth), if every kid who ever tried an e-cigarette took one puff when he was in 10th grade and never touched one again, you would see almost exactly this increase in “ever tried”. So this trend is actually less steep than we would predict given the increasing availability of e-cigarettes. I honestly do not know whether these people even understand that simple arithmetic fact.
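The one-puff-in-10th-grade scenario is easy to make concrete. This is a stylized assumption for arithmetic’s sake, not NYTS data, and the function is mine:

```python
# Illustrative arithmetic only (stylized assumption, not NYTS data):
# suppose every kid takes exactly one puff in 10th grade, starting in
# 2011, and never touches an e-cigarette again. "Ever tried" among the
# seven surveyed grades still rises steeply with zero growth in use.
grades = range(6, 13)  # grades 6-12 are surveyed each year

def ever_tried_cohorts(year, first_year=2011, trial_grade=10):
    """Count surveyed grade cohorts that have passed through 10th grade
    since e-cigarettes appeared (each counts fully as 'ever tried')."""
    count = 0
    for grade in grades:
        # the cohort now in this grade hit 10th grade in:
        year_hit = year - (grade - trial_grade)
        if grade >= trial_grade and first_year <= year_hit:
            count += 1
    return count

for year in (2011, 2012, 2013):
    print(year, ever_tried_cohorts(year))  # 1, then 2, then 3 cohorts
```

One cohort’s worth in 2011 grows to three cohorts’ worth in 2013: a tripling of “ever tried” with literally constant trial behavior, which is close to the reported 79,000-to-263,000 jump.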
Perhaps most important, CDC repeats last year’s propaganda lie of ignoring the fact that most kids who had tried e-cigarettes were smokers, this time by simple omission. It is easy to calculate from their results that those who had tried cigarettes were more than 20 times as likely to have tried an e-cigarette (or to have tried one in the last 30 days) as those who had never smoked. In other words, as far as we can tell from this, it may be that almost every kid trying an e-cigarette is doing so for purposes of THR. Of course, we cannot conclude that from the data, though it would be far less of a stretch than the conclusions that the authors actually made.
And a few random bits:
When I criticized the FDA’s proposed deeming regulation, you will recall how I pointed out that the cited references almost never actually supported the statement that was being made. My favorite from the CDC paper:
Furthermore, e-cigarette advertising is currently permitted on television, which is exposing youth to smoking images for the first time in nearly four decades (Warner & Goldenhar, 1989).
Yes, a paper from 1989 is being cited as the basis for a claim about current advertising. I know that many readers will be far more appalled by the reference to e-cigarettes being “smoking images”, and I agree that this is absurd political gamesmanship that has absolutely nothing to do with the study being presented. But I tend to find the utterly random use of citations — once again, allowed by the journal’s peer-review process — to be even more damning, just because it is so bright-line wrong.
Even worse are the blindly credulous citations of papers and claims that have been thoroughly exposed as junk science. But I will suppress the urge to point those out here.
I also cannot help but note that the authors use the unethical and inaccurate term “ENDS” (electronic nicotine delivery system) in the body of their paper (inaccurate because the NYTS does not ask whether the e-cigarettes that kids tried contained nicotine), whereas they use the proper term in their abstract and press release. Presumably the former serves as a dog-whistle for their ANTZ buddies, but they use proper language publicly so that they can more effectively scare the public.
And this postscript is good for a laugh in itself:
FUNDING: There were no sources of funding, direct or indirect, for the reported research.
Yes, that’s right. The authors did this for free. Oh, except for the fact that all of them are employees of two advocacy organizations (CDC and FDA CTP) which are dedicated to demonizing e-cigarettes, and undoubtedly knew that their comfortable employment would be threatened if they did not produce a hatchet job like this, even if one or two of them really wanted to be honest.
So, in conclusion, aggressive prevention efforts for youth use of e-cigarettes appear unnecessary. Just kidding. Though there is good support for that conclusion, it obviously does not follow from the present analysis, and this blog has far higher scientific standards than peer-reviewed “public health” journals. And speaking of peer review, I did this whole analysis (which is not every word one could write in criticism of that paper, but enough to clearly indict it as junk science as presented, and clearly not ready for publication) this afternoon. I wonder if the reviewers for Nicotine and Tobacco Research did not bother to think that hard about it, did not care how bad it was, or simply were not expert enough to understand the problems.
[h/t to @AllisonThinking, @ThaumaturgeRN, @ecigaretteforum, and Bill Godshall for some of the observations that appear here]