
Science lesson: The absurdity of “n deaths per year” and “leading preventable cause” claims about smoking.

by Carl V Phillips

Smoking is quite harmful. Lots of people choose to do it. Given these facts, you would think that people who warn/scold/fret about smoking, at the individual or population level, would see no reason to exaggerate. Yet they do. They lie constantly and habitually. Still, in spite of the lying, you might think that they would avoid making mantras of claims that are simply nonsense. Yet they do not.

I have covered most of this before, highlighting some of it as one of the six impossible things tobacco controllers believe, but I have never pulled it all together.

Consider first claims like “smoking causes 483,456.7 deaths per year in the U.S.” What does this even mean? It obviously does not mean what it literally says, that but for smoking, these individuals would not have died. Occasionally someone asserting these figures phrases the claim in a way that highlights the implicit suggestion of immortality, and is rightly ridiculed for it. But in fact, even the standard phrasing implies this if treated as natural language.

Understanding what this might(!) really mean requires understanding the epidemiology definition of causing a death, which, it is safe to say, few of those reciting the claims about smoking understand. This definition is actually, like much of epidemiology, fundamentally flawed, but it gets us closer to something meaningful. The textbook definition is that something is a cause of death if it made the death occur earlier than it otherwise would have. Notice that this means that every death (like every event) has countless causes. E.g., a particular death may have been caused (in this sense of it occurring when it did and not later) by all of: smoking, being born male, not eating perfectly, occupational exposures, and choosing a low-quality physician. (Notice that if we extend to a broader definition of causation, other causes include the evolution of life on Earth and the individual’s grandfather making it home from the war.)

This typical version of the definition is fairly useless because it includes exposures that caused the death to occur only a few seconds sooner than it would have. We are seldom interested in those. Indeed, by that definition, smoking is a cause of death for almost every smoker and former smoker. It is very likely that any smoker who is not killed instantly by trauma would have survived longer had she not smoked, because whatever disease killed her would have developed more slowly, or simply because the body would have functioned for a few more minutes. So more useful definitions of a cause of death would be something that we estimate caused the death to occur a month, or a year, or five years earlier than it would have. Note that a far more useful measure, in light of these problems, is “years of potential life lost” (YPLL).
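
To make these competing definitions concrete, here is a minimal sketch in Python; the years-hastened values are invented for illustration (in practice no one can observe them):

```python
# Sketch to make the competing definitions concrete. Each entry is one death,
# with a hypothetical estimate of how many years earlier it occurred because
# of the exposure. In reality these values are unobservable.
years_hastened = [0.0001, 0.2, 1.5, 6.0, 12.0]

textbook_count = sum(1 for y in years_hastened if y > 0)   # 5 -- nearly every death
count_1yr      = sum(1 for y in years_hastened if y >= 1)  # 3
count_5yr      = sum(1 for y in years_hastened if y >= 5)  # 2
ypll           = sum(years_hastened)                       # about 19.7 years

# The body count swings wildly with the arbitrary threshold; YPLL at least
# aggregates the harm into a single stable, meaningful quantity.
```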

So which of those definitions is the “X deaths per year” claim based on, given that it is clearly neither the literal meaning (with its implication of immortality) nor the faulty textbook epidemiology definition (which would include approximately all deaths among ever-smokers)? The answer is: none of them. Those statistics are actually a toting up of deaths attributed to a particular list of diseases, each multiplied by an estimate of the portion of those cases that were caused by smoking, in historical U.S. populations. That is, it is the number of lung cancer deaths among smokers, multiplied by the portion of such deaths that are attributed to smoking, plus the number among former smokers multiplied by the attributable fraction for former smokers, plus those for heart attacks, plus those for a few dozen other specific declared causes of death.
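
The arithmetic behind the mantra, in a minimal sketch (every count and attributable fraction below is an invented placeholder, not an actual CDC input):

```python
# Minimal sketch of the attributable-fraction tally described above.
# Every number here is a hypothetical placeholder, not a real CDC input.
# Tuples: (deaths among current smokers, attributable fraction for them,
#          deaths among former smokers, attributable fraction for them)
diseases = {
    "lung cancer":            (110_000, 0.85, 40_000, 0.70),
    "ischemic heart disease": (120_000, 0.30, 60_000, 0.15),
    # ...a few dozen more declared causes of death...
}

total = sum(cur_d * cur_af + fmr_d * fmr_af
            for cur_d, cur_af, fmr_d, fmr_af in diseases.values())

print(f"claimed 'deaths caused by smoking': {total:,.0f}")
# Note what this actually is: a count of fatal disease cases attributed to
# smoking in historical U.S. data -- not people who otherwise would not have died.
```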

As you might guess, based on who is doing the toting, these numbers are biased upwards in various ways. Still, it would be possible to estimate that sum honestly (no one has tried to do so for a few decades, but it could be done). But the resulting measure would obviously not properly be described as “deaths caused by smoking.” It would not be that hard to identify what the figure really is, especially in serious written material like research papers or government statements: “each year in the U.S. smoking is estimated by the CDC to cause X fatal cases among 29 diseases.” Of course, most “researchers” and “experts” in the field do not even know this is what they are trying to say.

There are also several problems with the numbers themselves, not just the phrasing. First there is the noted, um, shading upward of the numbers. Second, as I alluded to in the third paragraph, the statistic is always presented with too much precision. Even two significant digits (e.g., 480,000) is too much precision. The estimates of the smoking-attributable fraction of cases of those diseases are, at best, not precise within tens of percent for smokers, let alone for former smokers, a much more heterogeneous category. That makes even one significant digit (e.g., 400,000) an overstatement of the precision.
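
A quick back-of-the-envelope sketch of why, with invented per-disease point estimates chosen only so they sum to the familiar headline number:

```python
# If each per-disease attributed count is uncertain to roughly +/-30%
# (optimistic, especially for former smokers), the total is nowhere near
# two-digit precision. The point estimates below are invented placeholders.
point_estimates = [93_500, 168_000, 120_000, 98_500]  # sums to 480,000

low  = sum(x * 0.7 for x in point_estimates)
high = sum(x * 1.3 for x in point_estimates)
print(f"plausible range: {low:,.0f} to {high:,.0f}")
# plausible range: 336,000 to 624,000 -- so reporting "480,000"
# (let alone 483,456.7) claims precision the inputs cannot support.
```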

Third, and more important for most versions of the statistic, is that “in historical U.S. populations” bit. The statistics you see for other countries or the whole world are based on the implicit assumption that everyone shares Americans’ health status and mix of exposures, because almost all the estimates come from U.S. studies (and those in the mix that do not are almost all from the countries that are most similar to the U.S.). At best, the estimated increase in risk for fatal cases of the disease is ported to the calculation for the other population, even though this varies across populations. That is, it is assumed that if the estimate is that half of all heart attacks among ever-smokers are caused by smoking, then that same multiplier is applied to heart attacks among ever-smokers in the other population. Worse, sometimes the attributable fraction itself is just ported, so if a quarter of all heart attacks in the U.S. are attributed to smoking, then that multiplier is applied to all heart attacks in other populations. That would mean, e.g., that if a particular population has a lot of extra cases of cancers due to diet, the same fraction of those cancers that is due to smoking in the U.S. is attributed to smoking there.
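
Here are the two porting shortcuts side by side, in a sketch with made-up inputs:

```python
# The two "porting" shortcuts described above, with made-up inputs.
us_af_ever_smokers = 0.50  # U.S.: half of heart attacks among ever-smokers attributed
us_af_all          = 0.25  # U.S.: a quarter of ALL heart attacks attributed

# Hypothetical other population, with a different mix of causes:
other_ha_ever_smokers = 30_000   # heart attacks among ever-smokers
other_ha_all          = 100_000  # all heart attacks, many driven by, e.g., diet

ported_multiplier = other_ha_ever_smokers * us_af_ever_smokers  # 15,000 attributed
ported_fraction   = other_ha_all * us_af_all                    # 25,000 attributed
print(f"multiplier shortcut: {ported_multiplier:,.0f}; "
      f"fraction shortcut: {ported_fraction:,.0f}")

# Both silently assume the U.S. mix of exposures; the second goes further and
# attributes to smoking a fixed share of cases actually caused by whatever
# else that population is exposed to.
```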

Fourth, and worse still, the forward-looking versions of the statistics would be innumerate nonsense even if none of the other problems existed. These include the infamous prediction of a billion deaths from smoking in the 21st century, as well as assertions about the fate of cohorts who are taking up smoking now. The number of deaths from a list of diseases that are attributable to smoking is going to vary hugely not just across populations, but across time. This is first-week Epidemiology 101 stuff. Population and time matter. There are no constants in epidemiology. The number of deaths from particular diseases will vary with technology. The attributable fraction will vary with the prevalence of other risk factors. Oh, and for those other changes, good news often makes things “worse”: An asteroid destroys higher life on Earth, and smoking stops causing any deaths. War, hunger, and infections are reduced, and smoking causes a lot more cases of fatal diseases.

In summary, these statistics are: (a) not actually the number of deaths caused by smoking, (b) exaggerated, (c) far less precise than claimed, even setting aside the intentional bias, (d) only valid for a few populations, and (e) only applicable to the present (or, really, the recent past).

Moving on to the “leading preventable cause of death” claims, this mantra is equally absurd if you pause to actually look at the words. What does “preventable” mean? Typically in such contexts, it means “some obvious top-down action could have averted it.” So, for example, of the 3000 deaths from Hurricane Maria, a few score were hard to do much about. Each of those was “preventable” in some sense (fly the particular person to Miami in advance of the storm), but this is meaningless; picking out those particular few dozen someones and keeping them from getting killed was not a real option. But the vast majority of those deaths were meaningfully preventable — in the sense that an operationalizable action could have kept them from happening — with a competent relief operation.

So if this normal use of the word is what tobacco controllers mean when they recite this mantra, then they are basically testifying that they are horrifically incompetent. They spend their lives trying to prevent this from happening, and they fail even though it is doable. But while it is true that they are generally horrifically incompetent at what they do, it is clearly not doable. Smoking is not preventable by this standard sense of the word.

Perhaps they are saying it is theoretically preventable, in that sense, but no one has figured out how to do it. At least that is plausible, but then the full statement is clearly false. There are more important causes of death that are theoretically preventable. Deterioration with age of cellular repair mechanisms seems to pretty clearly top the list. Humanity will figure out how to largely prevent that. This bit of prevention (in the “we will figure it out eventually” sense) dwarfs preventing the deaths caused by smoking. Indeed, it will prevent a lot of the fatal disease cases that are caused by smoking. (I have a vision of one of my kids finding this post in an archive 200 years from now and being sad that this technology came a few decades too late for me. And for most of you too — sorry.)

Most likely, what they are not-quite-saying is that each individual who “dies from smoking” (i.e., has a fatal case of a disease that was caused by smoking) could have made a choice to not have that happen. In some sense, this suffers from the same problem that such a claim about hurricanes or earthquakes does: Yes, every death from a collapsed building could have been prevented by the person choosing to be in a different building. But it has a bit more legitimacy, since it is obvious what the safer choice is and the probability is high enough to influence the decision. The problem here is that, to make this a meaningful statement, tobacco controllers would have to acknowledge that smoking and other tobacco product use is an individual choice. They are not willing to say that out loud — and thus admit that their entire enterprise is devoted to keeping people from making the choices they want — so they hide it behind weasel words like “preventable”.

But just because the statement “the leading cause of death among individual behavioral choices” is meaningful does not mean it is right. Indeed, it is obviously wrong. Go back to the epidemiology textbook definition of a cause of death. Smoking is a cause of death, by that definition, for approximately everyone who smokes. But eating a less-than-optimal diet is, for the same reason, a cause of death for everyone who eats less than optimally. Two or three times as many deaths occur among people who ate less than optimally (i.e., basically everyone), as compared to those who smoked, so smoking is clearly not “leading”. Of course, no one really thinks in terms of that textbook definition. So how about if we limit it to deaths that occurred a year earlier than they would have. It is pretty difficult to imagine figuring out the numbers, but I would expect diet still has the edge. How about five years? At that level, smoking might really be leading. How about putting it in terms of YPLLs? Yes, it is probably true that smoking costs more YPLL than any other individual choice.

Aha, so they are right!

Um, yeah. We just have to assume that these stupid phrases really represent deep and subtle thinking on the part of those using them. By “preventable” they actually mean resulting from individuals’ behavioral choices. By “cause of death” they actually mean cause of YPLLs. And their declaration that it is true, rather than speculation, is based on valid estimates of the comparative number of YPLLs from different behavioral choices, even though they never cite such evidence. Also, by “n deaths” they mean “n cases of particular fatal diseases attributed to smoking, if you believe our numbers, and assuming that the future looks exactly like the past and all populations are like the U.S.” Giving someone the benefit of the doubt is sometimes noble, but it would just be silly in this case.

The bottom line is that these mantras are just as false as much of the rest of what tobacco controllers claim. Moreover, they are not just factually wrong, but are a demonstration of just how thinking-free the whole endeavor is. At least things like “second-hand smoke causes 30% of all heart attacks” or “vaping is causing more kids to take up smoking” are meaningful claims. They are obviously false, but they are valid hypotheses and are only false because empirical evidence shows they are false, not because it is impossible for them to be true based on some simple fundamentals of how we know the world works.

Sure, people say things all the time that, if anyone paused to think and ask the question, would not stand up to a “what does that even mean?” query. We are not always precise in all our thinking, let alone in how it translates into words. But the claims in question are not fleeting thoughts or ad hoc word choices. They are mantras that get written or said a thousand times per day by supposedly credible people in supposedly credible contexts. The fact that they cannot pass a “what does that even mean?” test is one of the greatest overlooked testaments to the fundamental lack of seriousness in public health. The fact that they get repeated by others is a testament to how influential sloppy public health thinking is, even over those who are attempting to position themselves as opponents of it.

Let’s try to get our criticisms right, shall we? (More on the recent “vaping causes heart attack” study)

by Carl V Phillips

Sigh. We are supposed to be the honest and scientific ones in the tobacco wars. But we won’t be if we are not, well, scientific. Case in point: the criticisms of the recent paper with Glantz’s name on it that has been erroneously said to suggest that vaping doubles the risk of heart attack.

Incidentally, the meaningless statistic in the paper is a RR of 1.8, which is not double. Also, when the paper was originally written as a student class project (not by science students, mind you, but by medical students), that statistic was 1.4. That was when Glantz heard about it, managed to get the kids to put his name on the paper, and taught them how to better cook their numbers. That “contribution” has him being called the lead author.

The paper is junk science. So are most of the criticisms of it. If only someone with expertise in these methods had written a critique of it that people could look to. Oh, wait, here’s one in The Daily Vaper from February. That was based on a poster version of the paper, but as I noted in the article, “It has not yet appeared in a peer-reviewed journal, but it will, and the peer-review process will do nothing to correct the errors noted here.” I wish I could claim this was an impressive prediction, but it is about the same as predicting in February that the sun will rise in August.

You can go read that if you just want a quick criticism of the paper, and also look at the criticism on this page of some hilarious innumeracy Glantz piled on top of it. In the present post I am mostly criticizing the bad criticisms, though at the end I go into more depth about the flaws in the paper.

About half the critiques I have seen say something along the lines of “it was a cross-sectional study, and therefore it is impossible to know whether the heart attacks occurred before or after someone started vaping.” No. No no no no no. This is ludicrous.

Yes, the data was from a cross-sectional survey (the 2014 and 2016 waves of NHIS, mysteriously skipping 2015). And, yes, we do not know the relative timing (as discussed below). But “therefore it is impossible to know” (or other words along those lines)? Come on. A cross-sectional survey is perfectly capable of measuring the order of past events. Almost every single cross-sectional survey gives us a pretty good measure of, for example, whether someone’s political views were formed before or after the end of the Cold War. Wait! What kind of wizardry is this? How can such a thing be known if we do not have a cohort to follow? Oh, yeah, we ask them their age or what year they were born. Easy peasy.

Almost every statistic you see about average age of first doing something — a measure of the order in which events occurred (e.g., that currently more Americans become smokers after turning 18 than before, but most extant smokers started before they were 18) — is based on cross-sectional surveys that ask retrospective questions. It is perfectly easy to do a survey that asks heart attack victims the order in which events occurred. Indeed, any competent survey designed to investigate the relationship in question would ask current age, age of smoking initiation and quitting, age of vaping initiation and quitting, and age at the time of heart attack(s), ideally drilling down to whether smoking cessation was just before or just after the heart attack if they occurred the same year. We would then know a lot more than the mere order. But NHIS does not do that because, as I noted in the DV article, it is a mile wide and an inch deep. It is good for a lot of things, but useless for investigating this question. It can be used, as it was here, for a cute classroom exercise to show you learned how to run (not understand, but run) the statistical software from class. But only an idiot would think this paltry data was useful for estimating the effect.
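
To spell out how retrospective questions recover ordering, here is a sketch using a made-up respondent record; the field names are hypothetical, not NHIS variables (NHIS not collecting these is precisely the problem):

```python
# Made-up respondent record of the kind a competent survey would collect.
# These field names are hypothetical, not NHIS variables.
respondent = {
    "current_age": 54,
    "age_started_smoking": 17,
    "age_quit_smoking": 49,
    "age_started_vaping": 49,
    "age_first_heart_attack": 46,
}

if respondent["age_started_vaping"] > respondent["age_first_heart_attack"]:
    print("heart attack preceded vaping")  # this respondent: 46 < 49
else:
    print("vaping preceded (or tied with) heart attack")
# For same-year ties (here, quitting smoking and starting vaping at 49),
# you would drill down to month-level timing, as noted above.
```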

(A variation on these “therefore it is impossible” claims is the assertion that because it is a cross-sectional study, it can only show correlation and not causation. I am so sick of debunking that particular bit of epistemic nonsense that I am not even going to bother with it here.)

So, we do not know the order of events. We can be confident that almost all the smokers or former smokers who had heart attacks smoked before that event. We do not know whether subjects quit smoking and/or started vaping before their heart attacks. Given that vaping was a relatively new thing at the time of the surveys, whereas heart attacks were not, it seems likely that most of the heart attacks among vapers occurred before they started vaping. This creates a lot of noise in the data.

A second, and seemingly more common, erroneous criticism of the analysis is that this noise has a predictable direction: “Smokers had heart attacks and then, desperate to quit smoking following that event, switched to vaping, thereby creating the association.” Again, no no no. Heart attacks do cause some smokers to become former smokers, but there is little reason to believe they are much more likely than other former smokers to have switched to vaping. Some people will have heart attacks and quit smoking unaided or using some other method. Indeed, I am pretty sure (not going to look it up, though, because it is not crucial) that most living Americans who have ever had a heart attack experienced that event before vaping became a thing. So if they quit smoking as a result of the event, they did not switch to vaping. Also it seems plausible that the focusing event of a heart attack makes unaided quitting more likely than average, as well as making “getting completely clean” more appealing.

Of course, an analysis of whether behavior X causes event Y should not be based on data that includes many Y that occurred before X started. That much is obviously true. NHIS data is not even a little bit useful here, which is the major problem. There is so much noise from the heart attacks that happened before vaping even existed that the association in the data is utterly meaningless for assessing causation.

But there is no good reason to assume that this noise biases the result in a particular direction. If asked to guess the direction of the bias it creates, a priori, I probably would go in the other direction (less vaping among those who had heart attacks compared to other former smokers). The main reason we have to believe that the overall bias went in a particular direction is that the result shows an association that is not plausibly causal. We know the direction of the net bias. But this is not the same as saying we had an a priori reason to believe this particular bit of noise would create bias in a particular direction. When we see a tracking poll with results that are substantially out of line with previous results, it is reasonable to guess that random sampling error pushed the result in a particular direction. But we only conclude that based on the result; there was not an a priori reason to predict random sampling error would go in a particular direction.

Moreover, we do not have any reason to believe that the net bias was caused by this particular error, because it has a rather more obvious source (see below).

Sometimes we do have an a priori reason to predict the direction of bias caused by similar flaws in the data, as with the previous Glantz paper with an immortal person-time error (explained here, with a link back to my critique of the paper). If the medical students had engaged in a similar abuse of NHIS data to compare the risks of heart attack for current versus former smoking, then the direction of bias would be obvious: Heart attacks cause people to become former smokers, which would make former smoking look worse than it is compared to current smoking. I suspect that people who are making the error of assuming the direction of bias from the “Y before X” noise are invoking some vague intuition of this observation. They then mistranslate it into thinking that former smokers who had a heart attack are more likely to be vapers than other former smokers.

This brings up a serious flaw in the analysis that I did not have space to go into in my DV article: The analysis is not just of former smokers who vape, but includes people who both smoke and vape, as well as the small (though surprisingly large) number of never-smokers who vape. If vaping does cause heart attacks, it would almost certainly do so to a different degree in each of these three groups. For reasons I explored in the previous post, different combinations of behaviors have different effects on the risk of an outcome. Vaping probably is protective against heart attack in current smokers because they smoke less than they would on average. If a smoker vapes in addition to how much she would have smoked anyway, the increased risk from adding vaping to the smoking is almost certainly less than the (hypothesized) increased risk from vaping alone. Whatever it is about vaping that increases the risk (again, hypothetically), the smoking is already doing that. Thus any effect from adding vaping to smoking would be small compared to the effect from vaping compared to not using either product. Most likely the effect on current smokers would be nonexistent or even protective.

Indeed, this is so predictable that if you did a proper study of this topic (using data about heart attacks among vapers, rather than vaping among people who sometime in the past had a heart attack; also with a decent measure of smoking intensity — see below), and your results showed a substantial risk increase from vaping among current smokers, it would be a reason to dismiss whatever result appeared for former smokers. This is especially true if the estimated effect was substantial in comparison to the estimate for former- or never-smokers. If you stopped to think, you would realize that your instrument produced an implausible result, and thus it would be fairly stupid to believe it got everything else right. This is a key part of scientific hypothesis testing. Of course, such real science is not part of the public health research methodology. Nor is stopping to think.

It is a safe bet that the students who did this analysis understand none of that, having never studied how to do science and lacking subject-matter expertise. Glantz and the reviewers and editors of American Journal of Preventive Medicine neither understand nor care about using fatally flawed methods. So the analysis just “controls for” current and former smoking status as a covariate rather than separating out the different smoking groups as it clearly should. This embeds the unstated — and obviously false — assumption that the effect of vaping is the same for current, former, and never smokers. Indeed, because “the same” in this case means the same multiplicative effect, it actually assumes that the effect for current smokers is higher than that for former smokers (because their baseline risk is higher and this larger risk is being multiplied by the same factor).
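
A toy calculation makes the embedded assumption visible; the baseline risks and the RR are made up for illustration:

```python
# Made-up baseline heart attack risks and vaping RR, to show what a single
# "controlled for smoking status" multiplier assumes.
baseline_risk = {"never": 0.01, "former": 0.02, "current": 0.04}
pooled_vaping_rr = 1.8  # one multiplier for everyone, as the model assumes

for group, risk in baseline_risk.items():
    implied_excess = risk * pooled_vaping_rr - risk
    print(f"{group} smokers: implied excess risk from vaping = {implied_excess:.3f}")

# The same multiplier forces the largest implied effect onto current smokers,
# exactly the group where any real added effect of vaping should be smallest
# (or protective). Stratifying -- fitting each group separately -- would not
# impose this assumption.
```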

Though they did not stratify the analysis properly, it is fairly apparent their results fail the hypothesis test. The estimate is driven by the majority of vapers in the sample, who are current smokers, so they must have had a substantially greater history of heart attacks.

There is a good a priori reason to expect this upward bias, as I noted in the DV article, but it is not the reason voiced in most of the critiques. It is because historically vapers had smoked longer and more than the average ever-smoker. This is changing as vaping becomes a typical method for quitting smoking, or a normal way to cut down to having just a couple of real cigarettes per day as a treat, rather than a weird desperate attempt to quit smoking after every other method has failed. Eventually the former-smoking vaper population might look just like the average former-smoker population, with lots of people who smoked lightly for a few years and quit at age 25, and so on. But in the data that was used, the vapers undoubtedly smoked more than average and so were more likely to have a heart attack (before or after they started vaping).

Controlling for smoking using only “current, former, never” is never adequate if the exposure of interest is associated with smoking history and smoking causes the outcome, both of which are obviously true here. If there are no such associations then there is no reason to control for smoking, of course. Thus basically any time you see those variables in a model, you can be pretty sure there is some uncontrolled confounding due to unmeasured smoking intensity. In this case, you can be pretty sure that its effect is large and it biases the association upward.
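
To see that confounding mechanism in action, here is a toy simulation; every parameter is invented, and vaping is given no true effect at all:

```python
# Toy simulation of confounding by unmeasured smoking intensity.
# All parameters are invented; vaping has NO true effect on heart attacks here.
import random

random.seed(0)
people = []
for _ in range(200_000):
    pack_years = random.uniform(0, 40)                 # former smokers, varying intensity
    vapes = random.random() < 0.005 * pack_years       # heavier ex-smokers vape more
    mi = random.random() < 0.01 + 0.002 * pack_years   # intensity drives heart attack risk
    people.append((vapes, mi))

def mi_rate(vaping):
    group = [mi for v, mi in people if v == vaping]
    return sum(group) / len(group)

print(f"apparent RR for vaping: {mi_rate(True) / mi_rate(False):.2f}")
# Prints an RR around 1.3 despite the true effect being exactly zero, because
# "former smoker: yes/no" does not capture how much they smoked.
```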

In short, the results are clearly invalid. There are slam-dunk criticisms that make this clear. So let’s try to stick to those rather than offering criticisms that are as bad as the analysis itself. Ok?

Dual use and the arithmetic of combining relative risks

by Carl V Phillips

It was called to my attention that UCSF anti-scientist Stanton Glantz recently misinterpreted the implications of one of his junk science conclusions. Just running with the result from the original junk science (which I already debunked) for purposes of this post, Glantz made the amusing claim that because vaping increases heart attack risk with an RR=2 and smoking with an RR=3 (set aside that both of these numbers are bullshit), dual use must have an RR=5. WTAF?

First off, there is no apparent way to get to 5 except by pulling it out of the air. It is apparent that Glantz thought he was adding the risks: 2+3=5. Except you cannot add risks that way. Every first-semester student knows the formula for adding risks, which is based on the excess risk. Personally I have always thought that having students memorize that as a formula, rather than making sure they intuit it, is a major pedagogic failure. But that aside, they do memorize the formula, which subtracts out the baseline portion of each RR and then adds it back, as should be obvious: (RR1 – 1) + (RR2 – 1) + 1. So the additive RR = (2 – 1) + (3 – 1) + 1 = 4. Think about it: If you “added” Glantz’s way, then two risks that had RR=1.01 (a 1% increase in risk) would add to 2.02 (more than double). Or two exposures that each reduced the risk by 10% (RR=0.9) would add to an increased risk, RR=1.8. Not exactly difficult to understand why this is wrong.
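
The formula is one line of code, so here it is with Glantz’s fictitious numbers plugged in:

```python
def add_rrs(*rrs):
    """Combine relative risks additively: sum the excess risks,
    then add back the baseline of 1."""
    return sum(rr - 1 for rr in rrs) + 1

print(add_rrs(2, 3))        # 4 -- not Glantz's 5
print(add_rrs(1.01, 1.01))  # 1.02 -- not 2.02
print(add_rrs(0.9, 0.9))    # 0.8 -- two protective exposures stay protective
```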

Additivity of risks is a reasonable assumption if the risk pathways from the exposures are very independent. The excess risk of death caused by both doing BASE jumping and smoking is basically just the excess risk of each added together. (A bit less because if one kills you, you are then not at risk of being killed by the other.) If the risks from the two exposures travel down the same causal pathways (or interact in various other ways), however, adding is clearly wrong. If vaping causes a risk (for heart attack in this example, though that does not matter), then smoking almost certainly causes the same risk via the same pathway. There is basically no aspect of the vaping exposure that is not also present with smoking (usually more so, of course). When this is the case, there are various possible interaction effects. One thing that is clear, however, is that simply adding the risks as if they did not interact is wrong.

The typical assumption built into epidemiology statistical models is that the risks multiply. This is not based on evidence that it is true, but merely on the fact that it makes the math easier. The default models that most researchers tell their software to run, having little or no idea what is actually happening in the black box, build in this assumption. It is kind of roughly reasonable for some exposures, based on what we know. In the Glantz case, this would result in a claim of RR = 2 x 3 = 6, which is also not the same as 5.

So, for example, if a certain level of smoking causes lung cancer risk with RR=20, and a certain level of radon exposure causes RR=1.5, then if someone has them both, it is not unreasonable to guess that the combined effect is RR=30. The impact on the body (one exposure triggering a cancer, the other keeping its growth from being stopped) seems like it would work about like that. On the other hand, there are far more examples where the multiplicative assumption is obviously ridiculous. If BASE jumping once a week creates a weekly RR for death of 20, and rock climbing once a week has RR=2, doing each once a week obviously adds, as above, for RR=21, rather than multiplying to RR=40. (Aside: most causes of heart attack are probably subadditive, less than even this adding of the excess risks, as evidenced by dose-response curves that flatten out, as with smoking.)

But importantly, notice the “each once a week” caveat. That addresses the key error with the stupid “dual use” myths by specifying that the quantity of each activity was unaffected by doing the other. If, on the other hand, someone is an avid BASE jumper, doing it whenever he can get away, and he takes up rock climbing, the net effect is to reduce his risk. The less hazardous activity crowds out some of the more hazardous activity. This, of course, is what dual use of cigarettes and vapor products (or any other low-risk tobacco product) does. This is not complicated. Every commentator who responds to these dual use tropes — and I am not talking epidemiology methodologists, but every last average vaper with any numeracy whatsoever — points this out. Vaping also does not add to the risk of smoking because it almost always replaces some smoking rather than supplementing it. In this case, using Glantz’s fictitious numbers, it would mean the RR from dual use would fall somewhere between 2 and 3. Not added. Not multiplied. Not whatever the hell bungled arithmetic that Glantz did. Between.
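
Using Glantz’s fictitious RRs, the substitution arithmetic looks like this; the linear scaling is a crude illustrative assumption, not a real dose-response claim:

```python
# Crude sketch of the substitution point, using Glantz's fictitious RRs.
# Assumes, purely for illustration, that risk scales linearly with the
# fraction of smoking that vaping replaces.
rr_smoking, rr_vaping = 3.0, 2.0

def dual_use_rr(fraction_replaced):
    return (1 - fraction_replaced) * rr_smoking + fraction_replaced * rr_vaping

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{f:.0%} of smoking replaced -> RR = {dual_use_rr(f):.2f}")
# Every value lands between 2 and 3: below smoking alone, and nowhere
# near 5 (Glantz's addition) or 6 (naive multiplication).
```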

As I said, everyone with a clue basically gets this, though it is worth going through the arithmetic to clarify that intuition. It is not clear whether Glantz really does not understand or is pretending he does not — as with Trump, either one is plausible for most of his lies. Undoubtedly many of his minions and useful idiots actually believe it is right. The “dual use” trope gets traction from the fact that interaction effects from some drug combinations are worse than the risk of either drug alone. Many “overdose” deaths are not actually overdoses (the term that should be used for all drug deaths is “poisonings” to avoid that usually incorrect assumption), but rather accidental mixing of drugs that have synergistic depressant effects, often because a street drug was secretly adulterated with the other drug.

But as already noted, that is obviously not the case with different tobacco products, whose risks (if any) are via the same pathways. Even if total volume of consumption was unaffected by doing the other (as with “each once a week”) the risks would not multiply and would probably not even add. Since that is obviously not true — since in reality, consuming more of one tobacco product means consuming less of others — the suggestion is even more clearly wrong. In fact, using the term “dual use” to describe multiple tobacco products makes no more sense than saying that about someone who smokes sticks that came out of two different packs of Marlboros on the same day.

In the context of tobacco products, the phrase “dual use” is inherently a lie. It intentionally invokes the specter of different drugs (or other exposure combinations) that have synergistic negative effects. That is not remotely plausible in this case. It also intentionally implies additivity of the quantity of exposure (“doing all this, and adding in this other”) when it is actually almost all substitution, as with which pack you pull your cigarette from. To the extent that it increases total consumption of all products, this is a minor effect (a smoker who vapes not only as a partial substitute, but also occasionally when he would not have smoked even if he did not vape). This only matters to someone who does not care about risk, let alone people, and only cares about counting puffs.

There is a long list of words and phrases that, when used by “public health” people, should make you assume that whatever they are saying is a lie: “tobacco” (when used as if it were a meaningful exposure category), “addictive” (meaningless for drugs with little or no functionality impacts), “chemical” (a meaningful word, but invariably used because it sounds scary), and “carcinogen” (when used as a dichotomous characterization, without reference to the relevant dosage and risk). “Dual use” should be added to this list, in the same general space as “chemical”, another word that is inherently just a simple, boring technical descriptor, but that is almost exclusively used to falsely imply negative effects.