Category Archives: Science lesson

Science Lesson: Conflating age with inevitable temporality (i.e., some things first occur in youth merely because youth comes first)

by Carl V Phillips

A random science lesson, because I have not written a good “the conventional wisdom — the way everyone looks at this and thinks it is self-evidently true — is not the only plausible explanation” lesson in a while (other than tweet storms), and just want to. I was triggered on the topic by some chatter I saw about a recent paper, though neither of those is particularly important (so no links).

Consider an example from another realm: A large portion of significant original contributions in theoretical mathematics are figured out, or at least the seeds are completed, when the author is under 25 years old, or even under 20. The conventional wisdom is — or was (I have been out of that field for a long time) — that people’s sheer physical brainpower in this area declines with age, and that this is the only time someone has the ability to outperform all who have come before them. It is like being a professional athlete. You can be a perfectly solid athlete or science geek at 60 if you have the natural skills and keep at it, but to be among the absolute best — among the 0.001% who can be a performance-level jock or breakthrough mathematician — you have to have both the natural skills and be at your lifecycle physical peak.

But there is a plausible alternative theory that was pointedly ignored in that conventional wisdom: Generations of mathematicians have already worked out everything, within the bounds of what occurs to them to work on, that can be done by just plugging away at it. Therefore, new breakthroughs only come when someone is wired enough differently to see something beyond that, either in terms of recognizing something outside the existing bounds to pursue or some striking insight into a within-bounds problem. That is, they need to not just be solid in the skills of the field, but have one little cognitive quirk that no one else had. Either they have that when they are 16 or they don’t. If they do, they make their breakthrough early because they can. It is not about age — if one was somehow prevented from making the breakthrough for a couple of decades (but managed to keep up his skills in the field and was not scooped), he would have made it later.

Perhaps the relative contributions of those two factors have been largely resolved — as I said, I have been out of that area a long time. In contrast with the tobacco realm, most everyone who is aware of that debate is a smart clear thinker, so they may have long since worked out how much each of the stories explains the association of age and breakthroughs. But the point is that the naive explanation for something being associated with age — that it must have been entirely caused by age itself — was not so obviously correct as the conventional wisdom had it.

This is a metaphor, of course, for all the claims about tobacco use initiation, habituation, “addiction”, and such that are attributed to age because they are associated with age. This is a fail for exactly the reason found in the alternative theory of math prodigies: If something were able/likely to happen sometime in someone’s life, but not in most people’s, the fact that it happened early among the former (because it could) is not informative.

So we have the conventional wisdom that because smokers (etc.) mostly start fairly early in life, if you stop them from starting early, they never will. This is undoubtedly true to some extent. Everyone gets more set in their ways about what they do and do not do after adolescence. For smoking specifically, having adult-level judgment and a more forward-looking mindset makes it much less appealing (though this is not true for low-risk and potentially net beneficial smoke-free products). But it is obviously not nearly as true as is generally claimed. Someone who would have used a product at 16, but is somehow kept from doing so for two years, does not magically revert to having the average lack of interest (which means being below the line for inclination to use the product) at 18. The same is true if you substitute the age pairings 18…21 or even 16…40.

My goal here is to just immunize readers against the common naive error by planting the idea, so I am not going to delve deeply into the data. But just notice that transitioning to “smoker” status has gone down sharply among 14-year-olds in the US population, but not 18-year-olds. It is down overall, of course, but it is impossible to not notice that some of the “success” at earlier ages consists of delay rather than elimination. If the conventional wisdom were true, we should not have seen the sharp rise in the average age for that transition; the conventional wisdom says that the people who are pulling that average up do not exist.

The issue is clearer still for claims about early-initiating smokers (etc.) being more habituated (usually called “addicted” of course, but my readers will understand why that is bullshit rhetoric). If there is any variation within the population in terms of who is inclined to become strongly habituated — and obviously there is, due to both biological and social factors — then of course we see this. Those who are most inclined quickly become regular consumers upon first trialing at, say, 13. Those eventual-smokers (etc.) who ramp up more slowly were not so enamored, and so waited until it was easier to do. The former group are undoubtedly less likely to quit, have higher “dependence” scores, etc. The rhetoric attributes all of this obvious confounding to causation.
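The confounding story in this paragraph can be sketched in a few lines of code. Everything here is invented for illustration: a latent “inclination” score, arbitrary thresholds for early versus late initiation, and a dependence measure driven by inclination alone, with initiation age playing no causal role at all.

```python
import random

random.seed(0)

# Latent inclination to become habituated varies across the population
# (standing in for the biological and social factors in the text).
population = [random.gauss(0, 1) for _ in range(100_000)]

early, late = [], []
for inclination in population:
    if inclination > 1.5:      # most inclined: regular consumers by ~13
        early.append(inclination)
    elif inclination > 1.0:    # less enamored: ramp up later, ~18
        late.append(inclination)
    # everyone else never becomes a regular user

# "Dependence score" depends ONLY on inclination; initiation age is
# deliberately absent from the model.
def dependence(x):
    return 10 * x

mean_early = sum(map(dependence, early)) / len(early)
mean_late = sum(map(dependence, late)) / len(late)

# Early initiators score higher on "dependence" with zero causation by age.
print(mean_early > mean_late)  # True: pure confounding
```

The early group scores higher purely because selection put the most-inclined people into it, which is exactly the confounding-mistaken-for-causation pattern described above.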

This does not mean that there is no biological effect of early smoking (etc.) that causes greater inclination later in life, of course. But it does mean that the main body of evidence deployed in support of that claim is worthless. My readers presumably understand that the evidence deployed in support of “gateway” claims is bullshit because it merely observes the inevitable association across individuals choosing to use very similar products. Any association that is inevitable due to confounding cannot be said to be evidence of any causation without further serious analysis, analysis that tobacco control “researchers” never do. The present case is a bit more subtle than the gateway case, but it is exactly the same problem.

Similarly, these observations do not mean that somehow preventing an incidence of initiation at 16 is always just a delay rather than permanent prevention. There is some probability of each. There is ample reason to believe that the probability of mere delay is fairly high. Yet the claims based on the observed association almost always bake-in the unstated and unexamined assumption that the probability of it being mere delay is approximately zero.

I did not become a regular drinker until my 30s, or a regular user of nicotine products and sometimes [redacted because we live in a fucked-up anti-liberty police state when it comes to stuff like this] until later still. But I trialed all of these before I was 20 and did a bit during my 20s. Those who want to say “it is all about ‘youth’ initiation!!!” will spin this into supporting their claims. Look closely at their claims and you will see that most of them would attribute my later behavior to those largely forgotten moments from adolescence. I can tell you there was no causal continuity between the trialing and the later period of ongoing use, except via the confounding pathways. Granted I am a bit unusual — I have taken up quite a few things at times in my odd life that very few people ever do if they do not start at a much younger age: professional popular writing, various sports, farming, having babies. But the oddity there just illustrates the point that acting upon willingness or interest gets mistaken for causation, because willingness and interest are usually not kept latent for so long.

Consider one more metaphor that illustrates a different angle on this: adults who choose to visit Disney World (i.e., because they like to, not just because they are roped into taking their kids). There is undoubtedly a huge association between this and having visited as a child. Undoubtedly it is causal to some extent, but it would be obviously stupid to assume the association is all causal. Among those negative for both traits are those with a religious or semi-religious objection to visiting, those who disdained the idea as children (often due to their particular subculture thinking of it as belonging to Others), and those for whom making the trip is unaffordable. Those traits tend to be fairly persistent through the lifecycle, and this alone creates an association. Among those positive for both traits are those who just love stuff like that, and so pushed their parents to take them and later chose to go again when they could. This increases the association with no causation in sight yet. Finally, among those positive for both are those who go back because they remember how much they enjoyed it as kids, the causal group. The “logic” of the tobacco control literature and rhetoric would be to claim that the association is caused entirely by the latter group.

I would assume that the marketing people at DisneyCorp — who are presumably much better at their jobs than most tobacco researchers and pundits are — have this all worked out and make extensive use of that knowledge. It would undoubtedly be possible to form honest estimates that separate the contributions of causation-by-age and mere temporality in the tobacco space also. But few in that space even recognize this is an issue, most of them want to pretend it is not, and few have the skills to do the (actually pretty simple) analysis to try to sort it out.

It is one more persistent set of lies (partially intentional, partially due to Dunning-Kruger) to be aware of when analyzing tobacco control claims.

Sunday Science Lesson: Calling vaping/tobacco use an “epidemic”: it’s even stupider than you might think

by Carl V Phillips

A correspondent suggested to me that those who are not population health experts have a gut feeling that all this rhetoric about “epidemics” — of tobacco use, of teen vaping, and such — is an innumerate misuse of the term. But few really understand why.

The first problem with these claims, which I think most people get, is that “epidemic” refers to a disease, and these behavioral choices are not diseases. Of course, words get used metaphorically, and it is a somewhat complicated technical term that thus has hundreds of pop-level dictionary definitions floating around. But keep in mind that the people misusing the word in this case are supposed health science experts: The WHO entitles its flagship biennial reports on anti-tobacco policies, “WHO report on the global tobacco epidemic 20xx”. The U.S. FDA has been banging on about an epidemic of teenage vaping. This misuse of the term frequently appears in public health journals. This is not ok.

Yes there are colloquial common-language uses of the term that are not limited to actual diseases (and there are also those that are narrower still and use the word to refer to infectious disease outbreaks specifically). But health “experts” and officials should not be using sloppy colloquial definitions of technical scientific words. It would be like economists using “efficient” to mean “quick and effective” or geneticists using “fitness” to refer to someone’s cardio statistics. (In case you do not know, each of those is an important technical term in its field, with a particular meaning.) It is similar to a physician using “cancer” or “poisoning” metaphorically when talking to a patient — “I’m afraid that you have cancer…. Of your motivation, which is keeping you from really focusing on your physical therapy.” (I have a recollection of Dr. Hibbert on The Simpsons having a conversation in which he keeps saying things like this, alarming the family for a beat before he makes clear he is not really meaning the words. Anyone know how to find a clip of that?)

So that alone is a simple, obvious fatal error in this usage. Anyone misusing the word to refer to a behavior they dislike, rather than a disease, and doing so in the context of health science, is engaging in propaganda rather than an attempt at accurate communication. But even if we set that aside — ignore the inappropriate metaphor of calling a behavior a disease — the use of the term is still blatantly incorrect.

For a disease to be in an epidemic state, it needs to have an incidence rate that is not necessarily high, but that is spiking above the normal baseline. (It also needs to be affecting a fairly broad population and not have a single source of exposure, as often happens with foodborne disease, or we instead call it an “outbreak”. But that is not really relevant for present purposes.) So, even though there are always a lot more heart attacks and HIV infections compared to Zika infections, Zika has recently been epidemic in some populations, while the others were not. The big numbers for the others are (usually) just the normal incidence rate. Exactly what is enough of a spike to qualify as epidemic is not precisely defined, but it is safe to say that (genuine) experts would not call a sudden jump of merely 10 or 20%, let alone a steady upward trend with similar increases, an epidemic.

So is tobacco use a global epidemic (accepting the metaphorical non-disease use of the word), as suggested by the WHO? Clearly not. It is actually in decline. In almost every population, the prevalence of tobacco use and, more importantly, the incidence of initiation are declining. I am pretty sure that there is not a single country where the current increases in tobacco use would qualify as an epidemic, though I might have overlooked somewhere. If you drill down enough, undoubtedly there are some subpopulations with recent spikes that would qualify as epidemic. But you have to look hard, and it is clearly not global.

(Aside: The number of smokers in the world has continued to increase, despite the decline in incidence and prevalence. The population is increasing faster than smoking rates decrease in all but the substitution-miracle countries. And of course, “tobacco use” does not decrease when product substitution reduces smoking. In addition, in a few large populations — extremely poor people who finally have enough income to afford tobacco products — rates are increasing, though not at epidemic levels. Bottom line: do not get misled by tobacco controllers when they temporarily switch their rhetoric from “epidemic!!!” to “we are close to eliminating smoking!!!” They are not.)

But how about more specific claims like FDA’s “epidemic of vaping among U.S. teenagers”? For that claim, the metaphor is even more strained (and, again, clearly inappropriate coming from an ostensibly scientific health agency): At least smoking can be metaphorically likened to a disease because it causes a lot of disease outcomes and so is similar to not-yet-morbidity-causing cases of an often-harmful infection. But vaping and other smoke-free tobacco use are approximately harmless. Saying “vaping epidemic” is a lot like saying “Fortnite epidemic”; yes, I suppose you can metaphorically refer to a sharp increase in the initiation of any consumption choice, but when the serious disease risks are trivial, it seems like a pretty stupid choice. You should just go with “sharp increase”. Though in cases like these, where the sharp increase in consumption is inevitable because no one was using the product a few years ago, even that is kind of stupid to say.

Worse is that this is an example of the “look at just one entry in the ledger” game that tobacco control rhetoric is notorious for. Compare their game when they pretend that smoking costs society money by toting up costs of treating (frequently fatal) smoking-caused diseases, ignoring the (almost exactly offsetting) reduction in the cost of treatment for some later disease that never happened because the person died from smoking. They also ignore other foregone consumption (housing, food, etc.) that results from earlier deaths; adding it all up, smoking’s health effects cause an enormous net savings in social resources. It is the same game used by those who say we cannot afford single-payer healthcare because it would cost $X trillion, and we cannot afford that — never mind that we are currently spending 30% more than that on healthcare, and would save that cost. It is as if someone said “I would eat out less, but I could not afford the resulting increase in my grocery expenditures.”
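The one-entry-in-the-ledger game reduces to trivial arithmetic. The numbers below are entirely made up; the point is only the structure of the accounting.

```python
# Invented figures (billions per year), for the shape of the argument only.
treating_smoking_caused_disease = 100   # the entry that gets quoted
care_for_later_diseases_avoided = -95   # treatment never needed after earlier deaths
other_foregone_consumption = -20        # housing, food, etc. never consumed

one_entry_view = treating_smoking_caused_disease
full_ledger = (treating_smoking_caused_disease
               + care_for_later_diseases_avoided
               + other_foregone_consumption)

print(one_entry_view)  # 100: "smoking costs society billions!"
print(full_ledger)     # -15: the net can flip sign once every entry is counted
```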

In the present case, FDA et al. ignore the decline in teenage smoking that offsets (and is pretty clearly caused by) the increase in vaping. What they are saying is equivalent to breathlessly panicking that we are experiencing an epidemic of a specific influenza strain, even though we were having an unusually mild flu season. It just happens that the year’s dominant strain is a relatively new mutation and there had not been many infections with this particular strain in previous years, even though influenza overall is almost always more common and more harmful than in the present year.

FDA’s carve-out logic also means we are also experiencing an epidemic of teens smoking Marlboros that were manufactured in 2018, even though smoking is way down. I mean, no one was smoking those just a few years ago, and now they are, like, everywhere! Something must be done!

In short, it makes no sense to talk about an epidemic of a single option within a category of competing diseases/products. The entire category should be considered. (Notice the trap here for the crowd who seeks to die on the hill of “e-cigarettes are not in the tobacco products category.” This is a way you could actually die on that hill.)

The final problem with the use of “epidemic” — even ignoring the inappropriate strained metaphor, the full-on falsity of the WHO’s version of the claim, and the misleading tricks behind FDA’s usage — is more subtle. It is a question of what counts as a population.

An epidemic occurs when there is a spike in cases, across time, within a particular population. We do not say that Congo has an epidemic of malaria because they have a much higher incidence rate than Canada, or vice versa for frostbite. The word someone is probably looking for there is “endemic”. We only say “epidemic” if there is an increase in the numbers within the country. However, this is not about the place, but the group of people. So who constitutes the population, the group of people to compare over time, for tobacco product use?

Unlike influenza, tobacco product use is all about cohort replacement. That is, flu incidence changes from year to year because a different portion of the (mostly) same population gets the disease. By contrast, population smoking prevalence changes mostly because the new cohort that is being added to the count (e.g., those turning 18 that year) has a different prevalence than those dying that year. Yes, there is smoking uptake by 19- to 25-year-olds (though that is still really a matter of cohort replacement, and would be clearly that if we took those FDA et al. like to call “youth” out of the adult population and looked only at prevalence for age 26+). Yes, there is also quitting at all ages. But year-to-year changes are driven by an entirely different engine as compared to infections sweeping through a population.

Notice that I have had to distinguish incidence (rate of new cases occurring) from prevalence (portion of the population who have the disease/behavior), a distinction that seems to baffle tobacco controllers even though it is first-semester public health. They can never decide which one they consider to be important. Sometimes they whine about rate of trialing (incidence of first-trying a product). Sometimes they whine about ever-use (prevalence of having ever trialed the product; this, of course, can only increase for a cohort over time), without seeming to understand the difference. Currently FDA seems to be making their “epidemic” claims about “used at least once in the last 30 days” prevalence.

Among their and WHO’s more subtle crimes against the word “epidemic” is that that word refers to spikes in incidence rates, not in prevalence. Consider that the prevalence of HIV is far higher now than ever before (thanks to maintenance treatments that let people live with it). Is this an epidemic? Similarly, the population prevalence of HPV-16 will peak when the vaccines become sufficiently widely used (and thus in future years the new cohorts are immune while some of those with the virus are dying off). So does this mean that the epidemic will be at its height at a time when the incidence rate is hitting its lowest point since the start of the sexual revolution? The innumerate use of the word that FDA is employing would say exactly that.
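A toy cohort model makes the incidence-versus-prevalence point concrete. All numbers are invented: incidence of a condition falls 10% a year, while treatment keeps nearly everyone who has it alive, so prevalence keeps climbing even as new cases collapse.

```python
# Track invented annual new cases (incidence) and total cases (prevalence).
incidence = 1000.0
prevalence = 0
history = []

for year in range(20):
    new_cases = int(incidence)
    deaths_among_cases = int(prevalence * 0.02)  # 2%/yr mortality with treatment
    prevalence += new_cases - deaths_among_cases
    history.append((new_cases, prevalence))
    incidence *= 0.9  # new cases decline 10% per year

first_new, first_prev = history[0]
last_new, last_prev = history[-1]

print(last_new < first_new)    # True: incidence is way down
print(last_prev > first_prev)  # True: prevalence is higher than ever
```

By the prevalence-based usage, the “epidemic” would be at its height exactly when new cases hit their lowest point, which is the absurdity being described here.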

But getting back to the question of populations, the problem is more subtle still. Consider the population “Americans born in 2000”. When looking for epidemic-level increases, can we look at that cohort’s own incidence rate of vaping or even (ignoring the fatal problem noted in the previous paragraph) prevalence of recent usage? That obviously does not work, because of course those will be higher in last year’s statistics than they were in recent years when that population were little kids. That is like saying we are experiencing a huge increase in knowledge of simple calculus, because so many more in that cohort know it now than a few years ago. Is someone also going to whine that there is currently an epidemic of premarital sex among Americans born in 2000? (We should certainly hope there is!)

Even though this is really the only comparison that makes sense for properly using the word epidemic, it is obviously dysfunctional and so is not the comparison that gets made. Instead, incidence rates (or, more likely, prevalences) are compared, year-to-year, among 17-year-olds (or whatever age group). But this is not a comparison within a population. It is, in fact, a comparison across entirely disjoint populations, like comparing Congo and Canada; those who are 17 on a particular date in 2017 include no one from the population who was 17 on that date in 2016. So you can say, e.g., that vaping among this year’s 17-year-olds is much higher than among last year’s (just as you can say malaria is much more common among Congolese than Canadians), but that is not an epidemic. That is cohort replacement.
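The disjoint-populations point can be sketched directly. The prevalences below are invented; what matters is that each birth cohort's value is fixed once, so the year-over-year "jump" among 17-year-olds involves no change in any individual's behavior.

```python
# Invented vaping prevalence at age 17, fixed per birth cohort.
prevalence_at_17 = {
    1998: 0.05,  # these people were 17 in 2015
    1999: 0.08,  # 17 in 2016
    2000: 0.15,  # 17 in 2017
}

# The year-over-year comparison among "17-year-olds":
jump = prevalence_at_17[2000] - prevalence_at_17[1999]
print(round(jump, 2))  # 0.07: reads like a spike...

# ...but the two comparison groups share no members at all.
seventeen_in_2016 = {f"born-1999-{i}" for i in range(1000)}
seventeen_in_2017 = {f"born-2000-{i}" for i in range(1000)}
print(seventeen_in_2016 & seventeen_in_2017)  # set(): disjoint, like Congo vs Canada
```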

I realize this is subtle, and its importance is probably lost on many readers. But believe me when I say that, for anyone literate in population health science, it stands out as a far larger error in the use of the term than the simple fact that vaping is not a disease, or even the “looking at only one line in the ledger” game.

To summarize how I believe we should respond to these innumerate “epidemic” claims: First, we should push back against the use of the word to refer to a behavior rather than a disease. This is not, however, because of some naive language purity urge, a failure to recognize that words get used metaphorically. Rather, it is because this usage is part of tobacco controllers’ game of trying to define the behavior as a disease. They are not actually trying to expand the definition of “epidemic” here, but that of “disease”, and we should push back.

Second, the strongest substantive replies are as follows: The FDA version of the claim is based on carving out one particular product, which is taking away market share from other products (which, oh by the way, are a hundred times worse for you). Always go back to the entire category, and perhaps consider noting the analogy of “Marlboros manufactured in 2018”, whose usage is up by infinity percent. As for the WHO version of the claim, it is simply factually false.

Third, the FDA version of the claim is actually something worse than false: it is nonsense. It is one thing to say something that could be true but happens to be false. It is another to utter a string of words that simply make no sense. There cannot be an epidemic of anything among 17-year-olds, based on year-to-year comparisons, because these are entirely disjoint populations. Again, you may have to take my word for it, but this is actually the clearest misuse of the word in this entire embarrassing mess. The supposed health scientists at FDA should understand this, though I expect they do not, and they are clearly immune to embarrassment.

Science lesson: The absurdity of “n deaths per year” and “leading preventable cause” claims about smoking.

by Carl V Phillips

Smoking is quite harmful. Lots of people choose to do it. Given these facts, you would think that people who warn/scold/fret about smoking, at the individual or population level, would see no reason to exaggerate. Yet they do. They lie constantly and habitually. Still, in spite of the lying, you might think that they would avoid making mantras of claims that are simply nonsense. Yet they do not.

I have covered most of this before, highlighting some of it as one of the six impossible things tobacco controllers believe, but have not pulled it all together before.

Consider first claims like “smoking causes 483,456.7 deaths per year in the U.S.” What does this even mean? It obviously does not mean what it literally says, that but for smoking, these individuals would not have died. Occasionally someone asserting these figures phrases the claim in a way that highlights the implicit suggestion of immortality, and is rightly ridiculed for it. But in fact, even the standard phrasing implies this if treated as natural language.

Understanding what this might(!) really mean requires understanding the epidemiology definition of causing a death, which, it is safe to say, few of those reciting the claims about smoking understand. This definition is actually, like much of epidemiology, fundamentally flawed, but it gets us closer to something meaningful. The textbook definition is that something is a cause of death if it made the death occur earlier than it otherwise would have. Notice that this means that every death (like every event) has countless causes. E.g., a particular death may have been caused (in this sense of it occurring when it did and not later) by all of: smoking, being born male, not eating perfectly, occupational exposures, and choosing a low-quality physician. (Notice that if we extend to a broader definition of causation, other causes include the evolution of life on Earth and the individual’s grandfather making it home from the war.)

This typical version of the definition is fairly useless because it includes exposures that caused the death to occur only a few seconds sooner than it would have. We are seldom interested in those. Indeed, by that definition, smoking is a cause of death for almost every smoker and former smoker. It is very likely that any smoker who is not killed instantly by trauma would have survived longer because whatever disease killed her would have developed more slowly, or simply because the body would have functioned for a few more minutes. So more useful definitions of a cause of death would be something that we estimate caused the death to occur a month, or a year, or five years earlier than it would have. Note that a far more useful measure, in light of these problems, is “years of potential life lost” (YPLL).
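The YPLL alternative mentioned here is easy to illustrate. The reference age of 75 is a common convention, and the ages at death are invented; the contrast is with a binary "caused the death" count that scores a few-seconds hastening the same as decades.

```python
REFERENCE_AGE = 75  # a common YPLL convention

# Invented ages at death for cases attributed to some exposure.
ages_at_death = [45, 60, 74, 74.9, 80]

# Years of potential life lost: weight each death by how early it came.
ypll = sum(max(0, REFERENCE_AGE - age) for age in ages_at_death)
print(round(ypll, 1))  # 46.1

# A binary count treats the death at 74.9 (hastened by a sliver) the same
# as the one at 45, and simply ignores anything past the reference age.
binary_count = sum(1 for age in ages_at_death if age < REFERENCE_AGE)
print(binary_count)  # 4
```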

So which of those definitions is the “X deaths per year” claim based on, given that it is clearly neither the literal meaning (with its implication of immortality) nor the faulty textbook epidemiology definition (which would include approximately all deaths among ever-smokers)? The answer is: none of them. Those statistics are actually a toting up of deaths attributed to a particular list of diseases, each multiplied by an estimate of the portion of those cases that were caused by smoking, in historical U.S. populations. That is, it is the number of lung cancer deaths among smokers, multiplied by the portion of such deaths that are attributed to smoking, plus the number among former smokers multiplied by the attributable fraction for former smokers, plus those for heart attacks, plus those for a few dozen other specific declared causes of death.
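Structurally, the headline figure described above is just this sum. Every number below is invented; the real accounting covers a few dozen diseases.

```python
# (disease, smoking status, deaths, estimated smoking-attributable fraction)
rows = [
    ("lung cancer",   "current smoker", 100_000, 0.85),
    ("lung cancer",   "former smoker",   50_000, 0.70),
    ("heart disease", "current smoker", 200_000, 0.30),
    ("heart disease", "former smoker",  150_000, 0.15),
    # ...plus a few dozen more disease/status rows in the real tally
]

attributed = sum(deaths * fraction for _, _, deaths, fraction in rows)
print(round(attributed))  # 85000 + 35000 + 60000 + 22500 = 202500
```

Nothing in that sum is "a death that would not have occurred but for smoking"; it is fatal cases of listed diseases, discounted by estimated fractions.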

As you might guess, based on who is doing the toting, these numbers are biased upwards in various ways. Still, it would be possible to estimate that sum honestly (no one has tried to do so for a few decades, but it would be possible). But the resulting measure would obviously not properly be described as “deaths caused by smoking.” It would not be that hard to identify what the figure really is, especially in serious written material like research papers or government statements: “each year in the U.S. smoking is estimated by the CDC to cause X fatal cases among 29 diseases.” Of course, most “researchers” and “experts” in the field do not even know this is what they are trying to say.

There are also several problems with the numbers themselves, not just the phrasing. First there is the noted, um, shading upward of the numbers. Second, as I alluded to in the third paragraph, the statistic is always presented with too much precision. Even two significant digits (e.g., 480,000) is too much precision. The estimates of the smoking-attributable fraction of cases of those diseases are not precise to within tens of percent for smokers, let alone for former smokers (a much more heterogeneous category), making even one significant digit (e.g., 400,000) an overstatement of the precision.
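To see why even one significant digit can overstate the precision, propagate an assumed ±20% uncertainty (the figure is illustrative, not from any study) through the headline number:

```python
point_estimate = 480_000
relative_uncertainty = 0.20  # assumed for illustration

low = point_estimate * (1 - relative_uncertainty)
high = point_estimate * (1 + relative_uncertainty)
print(round(low), round(high))  # 384000 576000

# The interval spans "about 400,000" through "about 600,000", so even the
# leading digit is not pinned down, let alone 480,000 or 483,456.7.
```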

Third, and more important for most versions of the statistic, is that “in historical U.S. populations” bit. The statistics you see for other countries or the whole world are based on implicit assumptions that everyone shares Americans’ health status and mix of exposures, because almost all the estimates come from U.S. studies (and those in the mix that do not are almost all from the countries that are most similar to the U.S.). At best, the estimated increase in risk for fatal cases of the disease is ported to the calculation for other populations, even though this varies across populations. That is, it is assumed that if the estimate is that half of all heart attacks among ever-smokers are caused by smoking, then that same multiplier is applied to heart attacks among ever-smokers in the other population. Worse, sometimes the attributable fraction itself is just ported, so if a quarter of all heart attacks in the U.S. are attributed to smoking, then that multiplier is applied to all heart attacks in other populations. That would mean, e.g., if a particular population has a lot of extra cases of cancers due to diet, the same fraction of those cancers that is due to smoking in the U.S. is attributed to smoking there.
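The two porting moves described here give different answers, which a bit of invented arithmetic makes obvious. Suppose half of heart attacks among ever-smokers in the U.S. are attributed to smoking, which works out to a quarter of all U.S. heart attacks, and consider a population where ever-smokers account for far fewer of the heart attacks:

```python
us_fraction_among_smokers = 0.50  # half of ever-smokers' heart attacks
us_fraction_of_all_cases = 0.25   # a quarter of ALL heart attacks

# Invented other population: smoking is rarer, other risk factors dominate.
other_total_heart_attacks = 10_000
other_among_ever_smokers = 2_000

# Port the among-smokers multiplier (the "at best" version in the text):
attributed_v1 = other_among_ever_smokers * us_fraction_among_smokers

# Port the all-cases fraction wholesale (the worse version):
attributed_v2 = other_total_heart_attacks * us_fraction_of_all_cases

print(round(attributed_v1), round(attributed_v2))  # 1000 2500
```

The wholesale port attributes 2.5 times as many deaths to smoking in this population, purely because the U.S. mix of other risk factors was assumed to apply there.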

Fourth, and worse still, the forward-looking versions of the statistics would be innumerate nonsense even if none of the other problems existed. These include the infamous prediction of a billion deaths from smoking in the 21st century, as well as assertions about the fate of cohorts who are taking up smoking now. The number of deaths from a list of diseases that are attributable to smoking is going to vary hugely not just across populations, but across time. This is first-week Epidemiology 101 stuff. Population and time matter. There are no constants in epidemiology. The number of deaths from particular diseases will vary with technology. The attributable fraction will vary with the prevalence of other risk factors. Oh, and for those other changes, good news often makes things “worse”: An asteroid destroys higher life on Earth, and smoking stops causing any deaths. War, hunger, and infections are reduced and smoking causes a lot more cases of fatal diseases.

In summary, these statistics are: (a) not actually the number of deaths caused by smoking, (b) exaggerated, (c) far less precise than claimed, even setting aside the intentional bias, (d) only valid for a few populations, and (e) only applicable to the present (or, really, the recent past).

Moving on to the “leading preventable cause of death” claims, this mantra is equally absurd if you pause to actually look at the words. What does “preventable” mean? Typically in such contexts, it means “some obvious top-down action could have averted it.” So, for example, of the 3000 deaths from Hurricane Maria, a few score were hard to do much about. Every one of these was “preventable” in some sense (fly the particular person to Miami in advance of the storm), but this is meaningless; preventing someone, probably a few dozen someones, from getting killed was not a real option. But the vast majority of those deaths were meaningfully preventable — in the sense that an operationalizable action could have kept them from happening — with a competent relief operation.

So if this normal use of the word is what tobacco controllers mean when they recite this mantra, then they are basically testifying that they are horrifically incompetent. They spend their lives trying to prevent this from happening, and they fail even though it is doable. But while it is true that they are generally horrifically incompetent at what they do, it is clearly not doable. Smoking is not preventable by this standard sense of the word.

Perhaps they are saying it is theoretically preventable, in that sense, but no one has figured out how to do it. At least that is plausible, but then the full statement is clearly false. There are more important causes of death that are theoretically preventable. Deterioration with age of cellular repair mechanisms seems to pretty clearly top the list. Humanity will figure out how to largely prevent that. This bit of prevention (in the “we will figure it out eventually” sense) dwarfs preventing the deaths from smoking. Indeed, it will prevent a lot of the fatal disease cases that are caused by smoking. (I have a vision of one of my kids finding this post in an archive 200 years from now, and being sad that this technology came a few decades too late for me. And for most of you too — sorry.)

Most likely, what they are not-quite-saying is that each individual who “dies from smoking” (i.e., has a fatal case of a disease that was caused by smoking) could have made a choice to not have that happen. In some sense, this suffers from the same problem that such a claim about hurricanes or earthquakes does: Yes, every death from a collapsed building could have been prevented by the person choosing to be in a different building. But it has a bit more legitimacy, since it is obvious what the safer choice is and the risk is of high enough probability to influence the decision. The problem here is that to make this a meaningful statement, tobacco controllers would have to acknowledge that smoking and other tobacco product use is an individual choice. They are not willing to say that out loud — and thus admit that their entire enterprise is devoted to keeping people from making the choices they want — so they hide it behind weasel words like “preventable”.

But just because the statement “the leading cause of death among individual behavioral choices” is meaningful does not mean it is right. Indeed, it is obviously wrong. Go back to the epidemiology textbook definition of a cause of death. Smoking is a cause of death, by that definition, for approximately everyone who smokes. But eating a less-than-optimal diet is, for the same reason, a cause of death for everyone who eats less than optimally. Two or three times as many deaths occur among people who ate less than optimally (i.e., basically everyone), as compared to those who smoked, so smoking is clearly not “leading”. Of course, no one really thinks in terms of that textbook definition. So how about if we limit it to deaths that occurred a year earlier than they would have. It is pretty difficult to imagine figuring out the numbers, but I would expect diet still has the edge. How about five years? At that level, smoking might really be leading. How about putting it in terms of YPLLs? Yes, it is probably true that smoking costs more YPLL than any other individual choice.

Aha, so they are right!

Um, yeah. We just have to assume that these stupid phrases really represent deep and subtle thinking on the part of those using them. By “preventable” they actually mean resulting from individuals’ behavioral choices. By “cause of death” they actually mean cause of YPLLs. And their declaration that it is true, rather than speculation, is based on valid estimates of the comparative number of YPLLs from different behavioral choices, even though they never cite such evidence. Also, by “n deaths” they mean “n cases of particular fatal diseases attributed to smoking, if you believe our numbers, and assuming that the future looks exactly like the past and all populations are like the U.S.” Giving someone the benefit of the doubt is sometimes noble, but it would just be silly in this case.

The bottom line is that these mantras are just as false as much of the rest of what tobacco controllers claim. Moreover, they are not just factually wrong, but are a demonstration of just how thinking-free the whole endeavor is. At least things like “second-hand smoke causes 30% of all heart attacks” or “vaping is causing more kids to take up smoking” are meaningful claims. They are obviously false, but they are valid hypotheses and are only false because empirical evidence shows they are false, not because it is impossible for them to be true based on some simple fundamentals of how we know the world works.

Sure, people say things all the time that, if anyone paused to think and ask the question, would not stand up to a “what does that even mean?” query. We are not always precise in all our thinking, let alone in how it translates into words. But the claims in question are not fleeting thoughts or ad hoc word choices. They are mantras that get written or said a thousand times per day by supposedly credible people in supposedly credible contexts. The fact that they cannot pass a “what does that even mean?” test is one of the greatest overlooked testaments to the fundamental lack of seriousness in public health. The fact that they get repeated by others is a testament to how influential sloppy public health thinking is, even over those who are attempting to position themselves as opponents of it.

A subtle tobacco control self-contradiction lie, re FDA pumping cigarette stock prices

by Carl V Phillips

Tobacco controllers contradict themselves all the time. That is the inevitable result of them saying whatever seems expedient at the time, without any concern for whether the evidence supports it, or even flatly contradicts it. When someone is sociopathic enough to do this (*cough* Trump *cough*), they will not only contradict the evidence, but (unless they have incredible discipline and intelligence, which they do not) also inevitably contradict themselves. Many of tobacco control’s self-contradictions are quite simple, with patently contradictory statements appearing in the same document, or even the same paragraph. It hardly seems worth searching out the contradictions that require analysis and observations across multiple threads. But this one is kind of interesting.

A classic tobacco control trope, which you still see a fair bit, is that tobacco companies have to recruit new generations of smokers to replace their current customers. Most of those reciting this probably actually believe it, which reflects public health people’s fundamental lack of awareness about how the world works. Anyone familiar with business (and I mean at just the level of reading the newspaper) will know that markets these days hardly look beyond the next quarter’s earnings. There is not the slightest interest in future generations, and barely any interest in two years from now. C-suite executives respond mainly to these very-short-term incentives. Even stakeholders in the company – medium- to high-level employees who are planning on working there for another couple of decades — do not care about selling products to future generations.

Similarly, economic theory and a long-term shareholder perspective say companies should not care about future generations. Even the most modest discounting of future profits makes sales to upcoming generations approximately worthless in present value terms. Shareholders would rather the companies buy back shares than invest in “recruiting” future smokers. If you take the anthropomorphic view that cigarette companies “want” to stay in business – rather than making the economically rational choice of maximizing profits as far as they go and then sunsetting – then they would be better off expanding horizontally into other logistics businesses (including other tobacco products) rather than worrying about whether anyone is smoking in 2060.
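The discounting point is simple arithmetic. A minimal sketch (the 7% discount rate and 40-year horizon are my own illustrative assumptions, not figures from any company’s books):

```python
# Present value of a unit of profit earned n years in the future,
# discounted at annual rate r.
def present_value(profit: float, r: float, n: int) -> float:
    return profit / (1 + r) ** n

# A dollar of profit from a customer "recruited" for sales 40 years out
# is worth well under a dime today at a modest 7% discount rate.
pv_now = present_value(1.00, 0.07, 0)     # 1.00
pv_40yr = present_value(1.00, 0.07, 40)   # ~0.067
```

Which is why, whatever one thinks of cigarette companies, “recruiting” customers for 2060 is not a rational use of their money.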

But let’s assume that tobacco controllers are sufficiently innumerate about business that they actually believe that companies “need” to recruit new generations. Then it cannot possibly be that they also believe this:

FDA’s attacks on the vapor product industry drove up cigarette company stock prices because they portend less competition for cigarettes over the time horizon that actually matters to the markets, a couple of years. The market cap increase reflects the expectation that the companies will be able to sell cigarettes for a higher per-unit profit and sell more cigarettes (the former is more important in terms of profits — another simple market fact that tobacco controllers do not understand — but the latter is what matters most in terms of social impacts).

Here’s the thing: If someone believes that the companies (i.e., their shareholders) actually care about selling to future generations, as they have claimed for decades, and believes that vaping will cause future generations to smoke, as they claim in this tweet and frequently, then they would have to predict that threatening to shut down the vapor product industry would depress share prices, or at least not send them through the roof.

As I said, tobacco control lies of self-contradiction are typically so blatant that there is no reason we have to dig this deeply. I certainly do not want to give tobacco controllers too much credit, by implying that they ever actually assess their hypotheses against the evidence. Still, it does not hurt to run through the scientific implications of what they say to illustrate the layers of their dishonesty.

Let’s try to get our criticisms right, shall we? (More on the recent “vaping causes heart attack” study)

by Carl V Phillips

Sigh. We are supposed to be the honest and scientific ones in the tobacco wars. But we won’t be if we are not, well, scientific. Case in point: the criticisms of the recent paper with Glantz’s name on it that has been erroneously said to suggest that vaping doubles the risk of heart attack.

Incidentally, the meaningless statistic in the paper is a RR of 1.8, which is not double. Also, when the paper was originally written as a student class project (not by science students, mind you, but by medical students), that statistic was 1.4. That was when Glantz heard about it, managed to get the kids to put his name on the paper, and taught them how to better cook their numbers. That “contribution” has him being called the lead author.

The paper is junk science. So are most of the criticisms of it. If only someone with expertise in these methods had written a critique of it that people could look to. Oh, wait, here’s one in The Daily Vaper from February. That was based on a poster version of the paper, but as I noted in the article, “It has not yet appeared in a peer-reviewed journal, but it will, and the peer-review process will do nothing to correct the errors noted here.” I wish I could claim this was an impressive prediction, but it is about the same as predicting in February that the sun will rise in August.

You can go read that if you just want a quick criticism of the paper, and also look at the criticism on this page of some hilarious innumeracy Glantz piled on top of it. In the present post I am mostly criticizing the bad criticisms, though at the end I go into more depth about the flaws in the paper.

About half the critiques I have seen say something along the lines of “it was a cross-sectional study, and therefore it is impossible to know whether the heart attacks occurred before or after someone started vaping.” No. No no no no no. This is ludicrous.

Yes, the data was from a cross-sectional survey (the 2014 and 2016 waves of NHIS, mysteriously skipping 2015). And, yes, we do not know the relative timing (as discussed below). But “therefore it is impossible to know” (or other words along those lines)? Come on. A cross-sectional survey is perfectly capable of measuring the order of past events. Almost every single cross-sectional survey gives us a pretty good measure of, for example, whether someone’s political views were formed before or after the end of the Cold War. Wait! What kind of wizardry is this? How can such a thing be known if we do not have a cohort to follow? Oh, yeah, we ask them their age or what year they were born. Easy peasy.

Almost every statistic you see about average age of first doing something — a measure of the order in which events occurred (e.g., that currently more Americans become smokers after turning 18 than before, but most extant smokers started before they were 18) — is based on cross-sectional surveys that ask retrospective questions. It is perfectly easy to do a survey that asks heart attack victims the order in which events occurred. Indeed, any competent survey designed to investigate the relationship in question would ask current age, age of smoking initiation and quitting, age of vaping initiation and quitting, and age at the time of heart attack(s), ideally drilling down to whether smoking cessation was just before or just after the heart attack if they occurred the same year. We would then know a lot more than the mere order. But NHIS does not do that because, as I noted in the DV article, it is a mile wide and an inch deep. It is good for a lot of things, but useless for investigating this question. It can be used, as it was here, for a cute classroom exercise to show you learned how to run (not understand, but run) the statistical software from class. But only an idiot would think this paltry data was useful for estimating the effect.

(A variation on these “therefore it is impossible” claims is the assertion that because it is a cross-sectional study, it can only show correlation and not causation. I am so sick of debunking that particular bit of epistemic nonsense that I am not even going to bother with it here.)

So, we do not know the order of events. We can be confident that almost all the smokers or former smokers who had heart attacks smoked before that event. We do not know whether subjects quit smoking and/or started vaping before their heart attacks. Given that vaping was a relatively new thing at the time of the surveys, whereas heart attacks were not, it seems likely that most of the heart attacks among vapers occurred before they started vaping. This creates a lot of noise in the data.

A second, and seemingly more common, erroneous criticism of the analysis is that this noise has a predictable direction: “Smokers had heart attacks and then, desperate to quit smoking following that event, switched to vaping, thereby creating the association.” Again, no no no. Heart attacks do cause some smokers to become former smokers, but there is little reason to believe they are much more likely than other former smokers to have switched to vaping. Some people will have heart attacks and quit smoking unaided or using some other method. Indeed, I am pretty sure (not going to look it up, though, because it is not crucial) that most living Americans who have ever had a heart attack experienced that event before vaping became a thing. So if they quit smoking as a result of the event, they did not switch to vaping. Also, it seems plausible that the focusing event of a heart attack makes unaided quitting more likely than average, as well as making “getting completely clean” more appealing.

Of course, an analysis of whether behavior X causes event Y should not be based on data that includes many Y that occurred before X started. That much is obviously true. NHIS data is not even a little bit useful here, which is the major problem. There is so much noise from the heart attacks that happened before vaping that the association in the data is utterly meaningless for assessing causation.

But there is no good reason to assume that this noise biases the result in a particular direction. If asked to guess the direction of the bias it creates, a priori, I probably would go in the other direction (less vaping among those who had heart attacks compared to other former smokers). The main reason we have to believe that the overall bias went in a particular direction is that the result shows an association that is not plausibly causal. We know the direction of the net bias. But this is not the same as saying we had an a priori reason to believe this particular bit of noise would create bias in a particular direction. When we see a tracking poll with results that are substantially out of line with previous results, it is reasonable to guess that random sampling error pushed the result in a particular direction. But we only conclude that based on the result; there was not an a priori reason to predict random sampling error would go in a particular direction.

Moreover, we do not have any reason to believe that the net bias was caused by this particular error, because it has a rather more obvious source (see below).

Sometimes we do have an a priori reason to predict the direction of bias caused by similar flaws in the data, as with the previous Glantz paper with an immortal person-time error (explained here, with a link back to my critique of the paper). If the medical students had engaged in a similar abuse of NHIS data to compare the risks of heart attack for current versus former smoking, then the direction of bias would be obvious: Heart attacks cause people to become former smokers, which would make former smoking look worse than it is compared to current smoking. I suspect that people who are making the error of assuming the direction of bias from the “Y before X” noise are invoking some vague intuition of this observation. They then mistranslate it into thinking that former smokers who had a heart attack are more likely to be vapers than other former smokers.

This brings up a serious flaw in the analysis that I did not have space to go into in my DV article: The analysis is not just of former smokers who vape, but includes people who both smoke and vape, as well as the small (though surprisingly large) number of never-smokers who vape. If vaping does cause heart attacks, it would almost certainly do so to a different degree in each of these three groups. For reasons I explored in the previous post, different combinations of behaviors have different effects on the risk of an outcome. Vaping probably is protective against heart attack in current smokers because they smoke less than they would on average. If a smoker vapes in addition to how much she would have smoked anyway, the increased risk from adding vaping to the smoking is almost certainly less than the (hypothesized) increased risk from vaping alone. Whatever it is about vaping that increases the risk (again, hypothetically), the smoking is already doing that. Thus any effect from adding vaping to smoking would be small compared to the effect from vaping compared to not using either product. Most likely the effect on current smokers would be nonexistent or even protective.

Indeed, this is so predictable that if you did a proper study of this topic (using data about heart attacks among vapers, rather than vaping among people who sometime in the past had a heart attack; also with a decent measure of smoking intensity — see below), and your results showed a substantial risk increase from vaping among current smokers, it would be a reason to dismiss whatever result appeared for former smokers. This is especially true if the estimated effect was substantial in comparison to the estimate for former- or never-smokers. If you stopped to think, you would realize that your instrument produced an implausible result, and thus it would be fairly stupid to believe it got everything else right. This is a key part of scientific hypothesis testing. Of course, such real science is not part of the public health research methodology. Nor is stopping to think.

It is a safe bet that the students who did this analysis understand none of that, having never studied how to do science and lacking subject-matter expertise. Glantz and the reviewers and editors of American Journal of Preventive Medicine neither understand nor care about using fatally flawed methods. So the analysis just “controls for” current and former smoking status as a covariate rather than separating out the different smoking groups as it clearly should. This embeds the unstated — and obviously false — assumption that the effect of vaping is the same for current, former, and never smokers. Indeed, because “the same” in this case means the same multiplicative effect, it actually assumes that the effect for current smokers is higher than that for former smokers (because their baseline risk is higher and this larger risk is being multiplied by the same factor).
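To see what the single-multiplier assumption quietly implies, here is a sketch with made-up baseline risks (the RR of 1.8 is the paper’s headline number; the baseline risks are purely illustrative):

```python
rr_vaping = 1.8  # the single multiplicative effect the model fits

# Hypothetical baseline heart attack risks by smoking status (made up).
baseline = {"never": 0.005, "former": 0.010, "current": 0.020}

# The same multiplier implies a LARGER absolute added risk for current
# smokers, because their higher baseline is what gets multiplied.
added_risk = {group: p * (rr_vaping - 1) for group, p in baseline.items()}
# With these inputs, current smokers get four times the added risk
# of never-smokers, purely as an artifact of the model form.
```

That is the opposite of the a priori expectation argued above, where adding vaping to ongoing smoking should have the smallest (or even a protective) effect.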

Though they did not stratify the analysis properly, it is fairly apparent that their results fail the hypothesis test. The estimate is driven by the majority of vapers in the sample who are current smokers, so those smoking vapers must have had a substantially greater history of heart attacks than their non-vaping counterparts.

There is a good a priori reason to expect this upward bias, as I noted in the DV article, but it is not the reason voiced in most of the critiques. It is because historically vapers had smoked longer and more than the average ever-smoker. This is changing as vaping becomes a typical method for quitting smoking, or a normal way to cut down to having just a couple of real cigarettes per day as a treat, rather than a weird desperate attempt to quit smoking after every other method has failed. Eventually the former-smoking vaper population might look just like the average former-smoker population, with lots of people who smoked lightly for a few years and quit at age 25, and so on. But in the data that was used, the vapers undoubtedly smoked more than average and so were more likely to have a heart attack (before or after they started vaping).

Controlling for smoking using only “current, former, never” is never adequate if the exposure of interest is associated with smoking history and smoking causes the outcome, both of which are obviously true here. If there are no such associations then there is no reason to control for smoking, of course. Thus basically any time you see those variables in a model, you can be pretty sure there is some uncontrolled confounding due to unmeasured smoking intensity. In this case, you can be pretty sure that its effect is large and it biases the association upward.

In short, the results are clearly invalid. There are slam-dunk criticisms that make this clear. So let’s try to stick to those rather than offering criticisms that are as bad as the analysis itself. Ok?

Sunday Science Lesson: Debunking the claim that only 16,000 smokers switched to vaping (England, 2014)

by Carl V Phillips

When this journal letter (i.e., short paper), “Estimating the population impact of e-cigarettes on smoking cessation in England” by Robert West, Lion Shahab, and Jamie Brown, came out last year, most of us said “wait, wot?” The authors estimated that in 2014, about 16,000 English smokers became ex-smokers because of e-cigarettes (a secondary analysis offered 22,000 as an alternative estimate). But that year saw an increase of about 160,000 ex-smokers who were vapers in the UK (the year-over-year increase for 2015 versus 2014), according to official statistics. In addition, there were about 170,000 more ex-smokers who identified as former vapers. Since the latter number subtracts from the number of ex-smokers who are vapers in 2015, it needs to be added back. So the year-over-year increase in English ever-vapers among ex-smokers appears to be nearly 200,000, after roughly adjusting for the different populations (England is 80% of the UK population). Thus West et al. are claiming, in effect, that the vast majority of people who went from smoking to vaping did not quit smoking because of vaping.

My calculation is rough, and for several reasons it may be a bit high (e.g., the measured points in 2015 and 2014 demarcate a year that falls slightly later in calendar time than 2014 itself, and the rate of vaping initiation was increasing over time). But we are still talking about well over 100,000 new ex-smoker vapers. Probably closer to 200,000. So this would mean that about 90% of new ex-smoker vapers either would have quit smoking that year even without vaping, had quit tobacco entirely and only later took up vaping, or are not “real quitters” (i.e., they were destined to start smoking again before they would “count” as having quit; the cutoff is not well defined, but the authors seem to use one year). This seems rather implausible, to say the least.
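The arithmetic of that reality check, using the rough figures above (the ~200,000 is my own rough England estimate, so treat every digit loosely):

```python
new_exsmoker_vapers = 200_000  # rough England figure derived above
west_estimate = 16_000         # West et al.'s headline number

share_caused = west_estimate / new_exsmoker_vapers  # 0.08
share_not_caused = 1 - share_caused                 # 0.92, the "about 90%"
```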

This is an extraordinary claim on its face given what we know about the advantages of quitting by switching, and more so given that more detailed surveys of vapers (example) show almost all respondents believe they would still be smoking had they not found e-cigarettes. It must be noted that most respondents to those surveys are self-selected vaping enthusiasts who differ from the average new vaper, and that a few of them might be wrong and would have quit anyway. But the disconnect is still far too great for West’s weak analysis (really, assumptions) to come close to explaining.

I never bothered to comment on the paper at the time it came out [beyond a minor mention here] because the methodology was so weak and the result so implausible that I did not think anyone would take it seriously. But the tobacco wars seldom meet a bit of junk science they do not like. In this case, Clive Bates asked me to examine the claim (and contributed some suggestions on this analysis and post) because some tobacco controllers have taken to saying “e-cigarettes caused only 16,000 people to quit smoking in England! so we should just prohibit people from using them!”

The proper responses to this absurd assessment and demand, in order of importance, are:

  1. It would not matter if they caused no one to quit smoking. It is a violation of the most fundamental human rights to use police powers to prohibit people from vaping if they want to. People have a right to decide what to do with their bodies. Moreover, in this particular case, you cannot even make the usual drug war claims that users of the product are driven out of their minds and do not understand the risks and the horrible path they will be drawn down: Vaping is approximately harmless, most people overestimate the risks, and it leads to no horrible path. It is outlandish — frankly, evil — to presume unto oneself the authority to deny people this choice.
  2. But even if you do not care about human rights and only care about health outcomes or whatever “public health” people claim to care about, causing a “mere” 16,000 English smokers to quit, annually, is quite the accomplishment. There is no plausible basis for claiming any recent tobacco control policy has done as much. Since there is no measurable downside, this is still a positive. Also, the rate of switching probably could be increased further with sensible policies and truthful communication of relative risks.
  3. The rough back-of-the-envelope approach used in the paper could never provide a precise point estimate even if the inputs were optimally chosen. But the inputs were not well chosen. The analysis included errors that led to a clear underestimate. When a back-of-the-envelope result contradicts a reality check, we should assume that reality got it right.

So I am taking up here what is really a tertiary point.

Back of the envelope calculations

West et al. carried out a back-of-the-envelope calculation, a simple calculation based on convenient approximations that is intended to produce a quick rough estimate. It happens to have glaring errors, but I will come back to those. Crude back-of-the-envelope calculations have real value in policy analysis. I taught students this for years. In my experience, when there is a “debate” about the comparative costs and benefits of a policy proposal, at least half the time a quick simple calculation shows that one is greater than the other by an order of magnitude. The simple estimate can illustrate that the debate is purely a result of hidden agendas or profound ignorance, and also eliminate the waste of unnecessary efforts to make precise calculations.

When doing such an analysis, it is ideal if you get the same result even if you make every possible error as “conservative” as is plausible (i.e., in the direction that favors the losing side of the comparison). West’s analysis would thus be useful if it were presented as follows: “Some people suggest that the health cost from vaping experienced by new vapers outweighs the reduction in the health cost from the smoking cessation that vaping causes. Even if we assume that vaping is 3% as harmful as smoking, the total health risk of the additional vapers (the annual increase) would be on the order of the equivalent of the risk for about 5,000 smokers. Our extremely conservative calculation yields on the order of 20,000 smokers quitting as a result of vaping. So even with extreme assumptions, the net health effect is clearly positive.”
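The arithmetic behind that hypothetical restatement (the 170,000 new vapers is my own back-filled input, chosen only to reproduce the ~5,000 figure; the other numbers come from the restatement itself):

```python
new_vapers = 170_000         # assumed annual increase in vapers (back-filled)
vaping_harm_fraction = 0.03  # "even if vaping is 3% as harmful as smoking"

# Total added risk, expressed in "smoker-equivalents" of risk.
smoker_equivalents = new_vapers * vaping_harm_fraction  # ~5,100
quitters = 20_000            # the extremely conservative cessation estimate

net_clearly_positive = quitters > smoker_equivalents    # True
```

The comparison survives even with every input pushed against vaping, which is exactly what makes a conservative back-of-the-envelope calculation useful.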

But the authors did not claim to be offering an extremely conservative underestimate for purposes of doing such a calculation. They implicitly claimed to be providing a viable point estimate. And that requires a more robust analysis rather than rough-cuts, and best point estimates rather than worst-case scenarios. It also requires a reality check about what would have to be true if the ultimate estimate were true, namely that almost everyone who switched from smoking to vaping did not stop smoking because of vaping.

West’s estimation based on self-identified quit attempts

The crux of their calculation is the following: Their surveys estimate that 900,000 smokers self-identify as having attempted to quit smoking using e-cigarettes (please read this and similar statistics with an implicit “in this population, during this period” and I will stop interjecting it). They then assume that 2.5% of them actually did quit smoking because of e-cigarettes.

Where does the 2.5% come from? It is cited to, and seems to be based mainly on, the results of the clinical trials in which some smokers were assigned to try a particular regimen of e-cigarettes; the 2.5% is an estimate of the rate at which they quit smoking in excess of the rate for those assigned to a different protocol.

Before addressing the problems with using trial results, note that the second paper they cite as a basis for the 2.5% figure is one by their own research group. How they got from that paper’s results to 2.5% is unfathomable. That paper was a retrospective study of people who had tried to quit smoking using various methods; it found that those reporting using e-cigarettes were successful about 20% of the time, which beat the two alternatives (unaided and NRT) by 5 and 10 percentage points. If they had used ~20% instead of ~2.5%, their final result would have been up in the range that would have passed the reality check. So what were they thinking?
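For scale, using the figures from the paragraph above:

```python
attempters = 900_000  # smokers who tried to quit using e-cigarettes

# Using the ~20% absolute success rate from their own cited study...
quit_at_20pct = attempters * 0.20    # 180,000, in reality-check range

# ...versus the ~2.5% they actually used.
quit_at_2_5pct = attempters * 0.025  # 22,500
```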

I cannot be certain, but am pretty sure. It appears they only looked at the differences in cessation rates and not the absolute rates, so the 5 or 10 percentage points rather than the full 20. Several things they wrote make it clear this is how they were thinking. This is one of several fatal flaws in their analysis. There are two main pathways via which e-cigarettes can cause someone to quit smoking (which means it would not have happened without them): E-cigarette use can cause a quit attempt to be successful when that same quit attempt would not otherwise have been successful, or it can cause a quit attempt (ultimately successful) that would not otherwise have happened. West et al. are pretty clearly assuming that the second of these never happens. I am guessing that the authors did not even understand they were making a huge — and clearly incorrect — assumption here.

Causing quit attempts accounts for a large portion of the cases in which e-cigarettes caused smoking cessation. Indeed, in my CASAA survey of vapers (not representative of all vapers, but a starting point), 11% of the respondents were “accidental quitters”: smokers who were not even actively pursuing smoking cessation, but who tried e-cigarettes and were so enamoured that they switched anyway. Add to these the smokers who had vague intentions of quitting but only made a concerted effort thanks to e-cigarettes, and probably about half of all quit attempts using e-cigarettes do not replace a quit attempt using another method. So if half the 900,000 made the quit attempt because of e-cigarettes and 20% succeeded, we have, right there, a number that is consistent with the reality check I proposed.
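That back-of-the-envelope can be written out explicitly. The 50% caused-attempt share is the rough figure suggested just above, not an established statistic, and the 20% success rate is the one reported in the retrospective study discussed earlier:

```python
# Rough consistency check for the reality check: if half of the 900,000
# quit attempts using e-cigarettes would not have happened at all without
# e-cigarettes, and ~20% of attempts succeeded (the rate in West's own
# retrospective study), the caused-attempt pathway alone yields:
attempters = 900_000
caused_attempt_share = 0.5   # rough estimate from the survey discussion above
success_rate = 0.20          # success rate reported in their cited paper

caused_quitters = attempters * caused_attempt_share * success_rate
print(f"{caused_quitters:,.0f}")  # 90,000
```

That 90,000 — from this one pathway alone — is already in the range the reality check demands, several times West et al.’s total.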

Of course they did not use that 20%, and it does seem too high. What they did was assume that 5% would have succeeded in an unaided quit attempt without e-cigarettes — and all the same people would have made that attempt — and so 7.5% (5%+2.5%) actually succeeded when using e-cigarettes. But if half never would have made that attempt then a full 7.5% of them should be counted as being caused to quit by e-cigarettes, which more than doubles the final result (“more than” because their final subtraction, below, would not double but should actually be reduced).
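The difference between the two accounting methods can be sketched with quick arithmetic. The 5% and 2.5% figures are West et al.’s assumptions as described above; the 50% caused-attempt share is the rough illustrative figure from the preceding discussion, not a number from their paper:

```python
# West et al.'s accounting vs. the correction for caused quit attempts.
attempters = 900_000        # self-reported quit attempts using e-cigarettes
success_with_ecigs = 0.075  # 5% assumed unaided success + their 2.5% increment
success_unaided = 0.05      # assumed counterfactual (unaided) success rate

# West et al.: every attempter would have made the same attempt anyway,
# so only the 2.5% increment counts as caused by e-cigarettes.
west_estimate = attempters * (success_with_ecigs - success_unaided)

# Correction: suppose half the attempts would not have happened at all
# without e-cigarettes. For those, the full 7.5% success rate was caused.
caused_attempt_share = 0.5
corrected = (attempters * caused_attempt_share * success_with_ecigs
             + attempters * (1 - caused_attempt_share)
             * (success_with_ecigs - success_unaided))

print(f"West-style estimate: {west_estimate:,.0f}")  # 22,500
print(f"Corrected estimate:  {corrected:,.0f}")      # 45,000 (double)
```

The gross figure exactly doubles under this assumption; the “more than” doubling of the final result comes from the subtraction step discussed below.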

As for why they did not use that 20%, I suspect (though they do not say) that when looking at the numbers from that paper, West et al. focused not only on the differences (the error I just discussed) but on the “adjusted” rates of how much more effective e-cigarettes were than the other methods, which were considerably lower than the numbers I quoted from the paper above. This too is an error. Public health researchers think of “adjusting” (attempting to control for confounding) as something you just do, a magical ritual that always makes your result better. This perception is false for many reasons, but a particularly glaring one in this case: The adjusted number is basically the measure of how helpful e-cigarettes would have been, on average, if those who tried to switch to them had the same demographics as smokers using other cessation methods. Smokers who try to switch to e-cigarettes have demographics that predict they are more likely to succeed in switching than the average smoker. Of course they do! People know themselves (a fact that seems to elude public health researchers). The ones who tried switching were who they were; they were not a random cross-section of smokers. So it seems that West et al. effectively said “pretend that instead of self-selecting for greater average success, those who tried to switch were chosen at random, and instead of using the success rate for the people who actually made that choice, we will use instead the number that would have been true if they were random.”

[Caveat: The attempt to control for confounding could also correct for the switchers having characteristics that make them more likely to succeed in quitting no matter what method they tried. So some of the “adjustment” is valid — only for those who would have tried anyway — but much of it is not.]

Clinical trials

That last point relates closely to the other “evidence” that was cited as a basis for that 2.5% figure, and appears to have dominated it: the clinical trials.

Clinical trials of smoking cessation are useless for measuring real-world effects of particular strategies when they are chosen by free-living people. At best they measure the effects of clinical interventions. But in this case, these rigid protocols are not even a good measure of the effect of real-world clinical interventions in which smoking cessation counselors try to most effectively promote e-cigarettes by meeting people where they are and making adjustments for each individual. I have previously discussed this extensively.

A common criticism is that the trials directed subjects toward relatively low-quality e-cigarettes. That is one problem. More important, the trials did not mimic the social support that would come from, say, a friend who quit smoking using e-cigarettes and is offering advice and guidance. The inflexibility of trials does not resemble the real-world process of trying, learning, improving, asking, and optimizing that such decisions entail. Clinical trials are designed to measure biological effects (and even then they have problems), not complex consumer choices.

But it is actually even worse than that. A common failing in epidemiology is not having a clue about what survey respondents really mean when they answer questions. There is no validation step in surveys where pilot subjects are given an open-ended debriefing of how they interpreted a question and what they really meant by their answer. (I always do that with my surveys, but I am rather unusual.) So consider what a negative response to “tried to quit smoking with e-cigarettes” really means. If a friend shoved an e-cigarette into a smoker’s hand and said “you should try this”, but she refused to even try it, she would undoubtedly not say she tried to quit smoking with e-cigarettes. But in a clinical trial, if that were her assignment, she would be counted among those who used e-cigarettes to try quitting, thus pulling down the success rate.

If she tried the e-cigarette that was thrust at her, but did not find it promising, in a survey she would probably not say she tried quitting using e-cigarettes. (She might, but given the lack of any reporting about piloting and validation of these survey instruments, we can only guess how likely that is.) If she passed that first hurdle of not rejecting e-cigarettes straightaway, but used them sometimes for a few days or weeks, she might or might not say she tried quitting using e-cigarettes. But if she actually quit using e-cigarettes, she would undoubtedly count herself among those who tried to quit using e-cigarettes. I trust you see the problem.

It is the same problem that is common in epidemiology when you read, say, that 20% of the people who got a particular infection died from it. This usually means that 20% of the people who got sick enough from it to present for medical care and get diagnosed died, while countless others had mild or even asymptomatic infections. Everyone in the numerator (died, in this case; quit, in the case of e-cigarettes) is counted, but an unknown and probably very large portion of those in the denominator (got the infection; were encouraged to try an e-cigarette) are not. Clinical trial results are (at best) analogous to the percentage you would get if you did antibody tests in the population to really identify who got the infection. That is the right way to measure the percentage of the infected who die. But if you then applied that percentage to the portion who presented for medical treatment, you would underestimate the number of them who would die. That is basically what West et al. did. Their 900,000 are those for whom e-cigarettes seemed promising enough to be worth seriously trying as an alternative, but they applied a rate of success that was (again, at best) a measure of the effect on everyone, including those who did not consider them promising enough to try.
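The denominator mismatch in that analogy can be made concrete with hypothetical numbers (these figures are purely illustrative, not from any real outbreak):

```python
# Hypothetical numbers illustrating the denominator mismatch.
infected = 1_000    # true number infected (found via antibody testing)
presented = 200     # those sick enough to seek care and be diagnosed
died = 40           # deaths (all occurring among those who presented)

infection_fatality_rate = died / infected   # 4%: the rate among all infected
case_fatality_rate = died / presented       # 20%: the rate among presenters

# Applying the population-wide rate to the presenting cases understates
# their deaths fivefold -- the analogue of applying a trial-derived success
# rate to the smokers who found e-cigarettes promising enough to seriously try.
predicted = presented * infection_fatality_rate
print(predicted, "predicted deaths vs", died, "actual")  # 8.0 vs 40
```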

This would be a fatal flaw in West’s approach even if the trials had represented optimal e-cigarette interventions, providing many options among optimal products and the hand-holding that would be offered by a knowledgeable friend, a vape shop, or a genuine smoking cessation counseling effort. They did not, and so underestimated even what they might have been able to measure.

Final step

As a final step, West et al.’s approach debits e-cigarettes with an estimated decrease in the use of other smoking cessation methods caused by those who tried e-cigarettes instead. These are the methods that are believed to further increase the cessation rate above the unaided quitting that West debited across the board (the major error discussed above). We can set aside deeper points about whether estimates of the effects of these methods, created almost entirely by people whose careers are devoted to encouraging these methods, are worth anything. West et al. assume that those methods would have had average effectiveness had they been tried by those who instead chose vaping. They also still assume that every switching attempt would have been replaced by another quit attempt in the absence of e-cigarettes, as discussed above. This lowers their estimate from 22,000 to 16,000. But a large portion of smokers who quit using e-cigarettes do so after trying many or all of those other methods, often repeatedly. Assuming those methods would have often miraculously been successful if tried one more time makes little sense.

As a related point that further illustrates the problems with their previous steps, recall that the 2.5% is their smoking cessation rate in excess of that of those who tried unaided quitting or some equivalently effective protocol. But it seems very likely that the average smoker who tries to switch to e-cigarettes has already had worse success with that other protocol than has the average volunteer for a cessation trial. This is the “I tried everything else, but then I discovered vaping” story. I am aware of no good estimate for this disparity, but if the average smoker who tried to switch were merely 1 percentage point less likely than average to succeed with the other protocol (e.g., because she already knew that it did not work for her), then the multiplier should have been 3.5% (7.5%-4% rather than 7.5%-5%). This is trivial compared to the error of using the incredibly low estimated success rate suggested by the trials in the first place, of course, but that little difference alone would have increased West’s estimate by 40%. This illustrates just how unstable and dependent on hidden assumptions that estimate is, even apart from the major errors.
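The instability can be shown directly. The 7.5% and 5% are West et al.’s assumptions as described above; the 4% counterfactual rate is the hypothetical one-percentage-point adjustment just discussed:

```python
# Sensitivity of the West-style estimate to the assumed counterfactual
# success rate. 7.5% is their assumed success rate with e-cigarettes;
# 5% is their assumed unaided rate; 4% is the hypothetical adjustment
# for switchers who already knew other methods had failed for them.
attempters = 900_000
success_with_ecigs = 0.075

for counterfactual in (0.05, 0.04):
    increment = success_with_ecigs - counterfactual
    estimate = attempters * increment
    print(f"counterfactual {counterfactual:.0%}: "
          f"increment {increment:.1%}, estimate {estimate:,.0f}")

# A one-percentage-point change in this single assumption moves the
# estimate from 22,500 to 31,500 -- a 40% increase.
```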

Returning to the reality check

But lest we get lost in the details, the crux is still that West implicitly concluded that the vast majority of those who switched from smoking to vaping did not quit smoking because of vaping. The authors never reflect on how that could possibly be the case. They do, however, offer an alternative analysis, in what are effectively the footnotes, that gives the illusion of responding to this problem without actually doing so. They write:

The figure of approximately 16,000–22,000 is much lower than the population estimates of e-cigarette users who have stopped smoking (approximately 560,000 in England at the last count, according to the Smoking Toolkit Study). However, the reason for this can be understood from the following….

What follows is even weirder than their main analysis.

West’s “alternative” analysis

They actually start with that 560,000. That is inexplicable since it is possible to estimate the year-over-year change in 2014, as I did, rather than working with the cumulative figure. The 560,000 turns out to be well under half what you get if you add the current vapers and ex-vapers among ex-smokers from the statistics I cite above. So their number already incorporates some unexplained discounting from what appears to be the cumulative number. But since I am baffled by this disconnect, I will just leave this sitting here and proceed to look at what they did with that number.

As far as I can understand from their rather confusing description of their methods here, their first step is to eliminate those who were already vaping by 2014, and thus did not switch in 2014. That makes sense, though it would have been easier to just start with that. When they do this, they leave themselves with 308,000. So they started with something much lower than what you get from the statistics I looked at, and ended up with something that is half-again higher than the rough estimate from those statistics. Um, ok — just going to leave that here too. But the higher starting figure makes it even more difficult for them to explain away the reality check.

Their next step is the only one that seems valid. They estimate that 9% of ex-smokers who became vapers did so sometime after they had already completely quit smoking, and subtract them. This is plausible. An ex-smoker who is dedicated to never smoking again still might see the appeal of consuming nicotine in a low-risk and smoking-like manner again. (Note that this should be counted as yet another benefit of e-cigarettes, giving those individuals a choice that makes them better off, even though the “public health” types would count it as a cost because they are not being proper suffering abstinents. It might even stop them from returning to smoking.)

Of course, this only makes a small dent. So where does everyone else go? Most of them go here:

It has to be assumed on the basis of the evidence [6, 7] that only a third of e-cigarette users who stopped smoking would not have succeeded had they used no cessation aid

…and here:

It is assumed that, as with other smoking cessation aids, 70% of those recent ex-smokers who use e-cigarettes will relapse to smoking in the long term [11]

This takes them down to 28,000.
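The arithmetic of those deductions can be reconstructed from the figures quoted above. The exact sequence of operations is my reconstruction, but it lands on their reported 28,000:

```python
# Reconstructing West et al.'s "alternative" calculation from the figures
# quoted in the text.
started = 308_000               # new vaping ex-smokers in 2014, per West
quit_before_vaping = 0.09       # took up vaping after already quitting
assumed_caused_share = 1 / 3    # "only a third ... would not have succeeded
                                #  had they used no cessation aid"
assumed_long_term = 1 - 0.70    # 70% assumed to relapse "in the long term"

remaining = started * (1 - quit_before_vaping)   # ~280,000
remaining *= assumed_caused_share                # ~93,000
remaining *= assumed_long_term                   # ~28,000
print(f"{remaining:,.0f}")                       # 28,028
```

Notice that roughly 90% of the reduction comes from the last two multipliers, which are exactly the assumptions under dispute.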

Taking the latter 70% first, any limitations in relying on a single source for this estimate (another West paper) are overshadowed by: (a) There is no reason to assume switching to vaping will work as poorly, by this measure, as the over-promising and under-delivering “approved” aids that fail because they do not actually change people’s preferences as promised. Indeed, there is overwhelming evidence to the contrary. (b) Many of those in the population defined by “started vaping that year and were an ex-smoker as of the end of the year” have already experienced a lot of the “long term”. That is, if we simplify to the year being exactly calendar 2014, some people joined that population in December, and thus a (correct, undoubtedly much lower than 70%) estimate of the discounting between “smoking abstinent for a week or two thanks to e-cigarettes” and “abstinent at a year” (a typical measure for “really quitting” as noted above) is appropriate. But some joined the population in January and are already nearly at the long term. On average, they will have been ex-smokers for about six months, and being abstinent at six months is a much better predictor of the long run than the statistic they used (which, again, is wrong to apply to vaping). Combining (a) and (b) makes it clear that this is a terrible estimate.

As for the first of those major reductions, references 6 and 7 do not actually provide any reason that “only a third…has to be assumed”. Those are the same references they cite for the 2.5% above. So this is just a reprise of the 2.5% claim, and suffers from the same errors I cited above.

You see what they did there, right? The reality check I offered is “your results imply that 90% of new ex-smoker vapers did not quit because of vaping; can you explain that?” Either anticipating this damning criticism or by accident, they provided their answer: “Yes, we assume — based on nothing that remotely supports the assumption — that a majority of them did not really quit and a majority of those who did would have quit anyway (and 9% were already ex-smokers, and some other bits).”

This step basically sneaks in the same fatal assumptions from their original calculation but is presented as if it offers an independent triangulation that responds to the criticism that their original calculation has implausible implications. Here is a pretty good analogy: Someone measures a length with a ruler that is calibrated wrong by a factor of ten. They are confronted with the fact that a quick glance shows that their result is obviously wrong. So they make a copy of their ruler and “validate” their results with an “alternative” estimation method.

Oh, and at the end of this they knock off another 6,000 based on what appears to be double counting, but at this point who really cares?

Conclusions

Their first version of the estimate is driven mainly by their assumption that attempting to switch to vaping is close to useless for helping someone quit smoking compared to unaided quitting, and also that all those who attempted to switch would have tried unaided quitting in the absence of e-cigarettes. There are also other errors. Their second version is based on the “reasoning” that because we have assumed that attempting to switch to vaping is close to useless, it must be that most of those we observed actually switching to vaping did not really quit smoking because of vaping — and so (surprise!) approximately the same low estimate.

So nowhere do they actually ever address the reality check question:

Seriously? You are claiming that almost everyone who ventured into one of those weird vape shops, who spent hundreds of pounds on e-cigarettes, who endured the learning curve for vaping, who ignored the social pressure to just quit entirely, and who decided to keep putting up with the limitations and scorn they faced as a smoker and would still face as a vaper, that almost all of them were someone who was going to just quit anyway? You are really claiming that almost all of them said, “You know, I think I will just quit buying fags this week — oh, wait, you mean I instead could go to the trouble to learn a new way of quasi-smoking and spend a bunch of money on new stuff and keep doing what I am doing even though I am really over it and ready to just drop it? Where do I sign up?” Seriously?

Reality. Check. Mate.

For what it is worth, if you asked me to do a back-of-the-envelope estimate for this, I would probably go with something like the following:

There were about 200,000 new vaping ex-smokers. It seems conservative to assume that about half of them quit smoking due to vaping. 100,000. Done.

That is obviously very rough, and the key step is just an educated guess. But an expert educated guess is often far better than fake precision based on obviously absurd numbers that just happen to have appeared in a journal (as a measure of something — in this case, not even the right thing). In this case, it has far better face validity than West et al.’s tortured machinations.

[Update, 4 Oct:

Since this was posted, two other flaws in the West analysis have become apparent. The first comes from my Daily Vaper article, which was based on the lessons from this: a terse presentation of the many ways in which vaping causes smoking cessation. That is worth reading in its own right if you are interested in this stuff. What occurred to me when writing that was that I was too charitable in just saying “ok fine” about the dropping of all ex-smokers who had become vapers after already quitting smoking. For some of them, taking up vaping caused them to not return to smoking. So a few of them should actually be counted. (One might make the semantic argument that the claim is about how many were caused to quit, not how many were caused to be (i.e., become or remain) ex-smokers, so they really do not count. But it is still worth mentioning.)

The second flaw came up in the comments, thanks to Geoff Vann. He figured out an internal inconsistency in the West approach. Basically, if their base methodology (assumptions, etc.) is applied to their step that removed the established vaping ex-smokers from that 560,000, it turns out that you cannot remove nearly as many as they do remove. You can see the details in the comment thread. Internal inconsistencies are always interesting because even if someone denies the criticisms from external knowledge and analysis — which are really far more damning — they cannot complain about being held to their own rules!

]

What is Tobacco Harm Reduction?

by Carl V Phillips

In response to a couple of recent requests and my schooling of FDA in a recent Twitter thread, it seems time for me to again write a primer on the meaning of tobacco harm reduction (THR). Rather than return to a previous version I have written, I am doing this from scratch. This seems best given the evolution of my thinking and changing circumstances.

The key phrase, of course, is “harm reduction”, with “tobacco” denoting the particular area it is applied to. This is important: THR is not a concept that stands apart from HR. It means “the principles of harm reduction, applied to the use of tobacco and nicotine products, and other products that tend to get lumped in with them” (see my previous post for an explanation of that last bit and some other useful background about the current politics). Indeed, when my university research and education group was trying to decide on a name and URL in 2005, it was far from obvious that this was the right term, and we considered others (e.g., “nicotine harm reduction”). While the first prominent use of “THR” appeared in 2001, it was far from established as a common term. (There is probably some endogeneity here, of course — if we had chosen a different term, that might have ascended instead.) In any case, the key to answering “what is THR” is asking “what is HR” rather than thinking it is something different. Continue reading

FDA’s proposed smokeless tobacco nitrosamine regulation: innumeracy and junk science (postscript)

by Carl V Phillips

For completion of this series (with this footnote), the following is what I submitted to FDA. My comment does not yet(?) appear on the public docket as of this writing. But I got a confirmation (confirmation code 1k1-8xfb-dhwh if you want to search for it later). It has a bit of extra content beyond what I already presented. Continue reading