The travesties that are Glantz, epidemiology modeling, and PubMed Commons

by Carl V Phillips

I was asked to rescue from the memory hole a criticism of a Glantz junk paper from a year ago. I originally covered it in this post, though I do not necessarily recommend going back to read it (it is definitely one of my less elegant posts).

I analyzed this paper from Dutra and Glantz, which claimed to assess the effect of e-cigarette availability on youth smoking. What they did would be a cute first-semester stats homework exercise, but it is beyond stupid to present it as informative. It is simple to summarize:

Dutra and Glantz took NYTS data for smoking rates among American minors. They fit a linear trend to the decline in the minor smoking prevalence between 2004 and 2009 (the latter being the start of the e-cigarette era, by their assessment; the former presumably being picked from among all candidate years based on which produced the most preferred result, as per the standard Glantz protocol). They then let the slope of the trend change at 2009 and observed that the fit slope was about the same for 2009 to 2014. From this paltry concoction, they concluded e-cigarettes were not contributing to there being less smoking. They then — again, standard protocol — told the press that this shows that e-cigarettes are causing more smoking.
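To make the mechanics concrete, here is a minimal sketch of the kind of kinked-line fit described above. The prevalence numbers are invented for illustration; they are not the NYTS data, and this is a reconstruction of the general technique, not their actual code:

```python
import numpy as np

# Illustrative "kinked line" fit: one linear trend whose slope is allowed
# to change at 2009. The prevalence figures below are made up for
# demonstration; they are NOT the NYTS data.
years = np.arange(2004, 2015)
prev = np.array([22.0, 21.0, 20.1, 19.2, 18.0, 17.1,
                 16.0, 15.2, 14.0, 13.1, 12.0])

t = years - 2004
post = np.maximum(years - 2009, 0)  # 0 before the knot, (year - 2009) after

# Design matrix columns: intercept, pre-2009 slope, additional slope after 2009
X = np.column_stack([np.ones_like(t), t, post])
intercept, slope_pre, slope_change = np.linalg.lstsq(X, prev, rcond=None)[0]

print(f"pre-2009 slope: {slope_pre:.2f} points/year")
print(f"slope change at 2009: {slope_change:+.2f} points/year")
```

The paper's entire conclusion amounts to asking whether `slope_change` differs from zero, which makes everything hinge on the assumption that both segments really are lines.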

My previous post includes my somewhat rambling analysis of why this is all wrong. After writing that, I accepted a request to write a tight version of it to post to PubMed Commons (presumably that was from Clive Bates, who is a big PMC fan, though I do not recall exactly). What follows, between the section break lines, is what I posted. It relies on more technical background on the part of the reader than I usually assume here, so I have added a few new notes (in brackets) to explain a few points.

Credit to Zvi Herzig for saving a copy of it before it was deleted (more on that story below), else I would no longer have a good copy. I am a year late in properly crediting Zvi for that favor by posting this.

It says something dismaying about the state of scientific thinking in this area that commentators on this paper have failed to recognize its glaring fatal flaw. This includes the journal editors and reviewers, but also the many critics of this paper, including the previous comment here and various blog and social media posts (example).

The paper’s analysis and conclusions are based entirely on the modeling assumption that U.S. teenage smoking prevalence would have declined linearly over the period 2004-2014, but for the introduction of e-cigarettes in the population. This assumption is made stronger still [i.e., more constraining and thus more speculative] by the additional assumptions that e-cigarettes introduced an inflection, with linear declines both before and after that introduction [i.e., that it consists of a line segment for each of the two periods, forced to meet at a corner, like a drinking straw with a kink folded into it], and that an estimate of whether their slopes are significantly different is a measure of the effect of e-cigarettes on smoking prevalence [i.e., the claim that because the bend in the straw is only slight, e-cigarette availability had no effect]. There is so much wrong with these assumptions that it is difficult to know where to start.

Probably the best place to start is the observation that the authors did not even attempt to justify or defend this body of assumptions. Given that the analysis is wholly dependent on them, this itself is a fatal flaw with the paper even if a reader could guess what that justification would have been. But it is very difficult to even guess. There are very few eleven-year periods in the historical NYTS data where smoking rate trends are monotonic (even ignoring the noise from individual years’ measurements), let alone linear. [Note: you can see a graph of the wildly varying historical data in my original post, and it is immediately obvious how absurd it is to try to fit a line to it.]

While teenage smoking prevalence is not nearly as unstable as many of their other consumption choices, no choice in this population can be assumed to be in the quasi-equilibrium state of, say, average adult beef consumption, where the shape of a curve fit to the gradual changes is largely immaterial. Unlike a stable adult population, the teenage population is characterized by both substantial cohort replacement between data waves [i.e., it is not the same people — the older kids from one year are no longer kids two years later, but are replaced with a new group of young kids] and fashion trends (rapid changes in collective preferences). Unlike many consumption choices, smoking is characterized by strong social pressures that sometimes rapidly change preferences. Indeed, the authors of this paper are proponents of the belief that marketing and policy interventions substantially change teenage smoking uptake. This makes their modeling assumption that the only impactful change over the course of eleven years was the introduction of e-cigarettes patently disingenuous.

In addition to the shape of the trend line, there are numerous candidates for modeling the impact of e-cigarettes, such as a one-off change in the intercept of the fit line rather than the slope [i.e., as if the straw were snipped apart and the bit after 2009 was allowed to shift up or down, but had to keep the same slope], or making it a function of e-cigarette usage or trialing prevalence [i.e., instead of forcing a linear fit on the latter period which effectively assumes that any effect increases proportional to the mere passage of time, use a curve that models the impact as proportional (or some other function) to actual e-cigarette exposure]. It is telling that the results from none of these alternative models, which are at least as plausible as the one presented, are reported. This either means that the authors ran such models but did not like the results and so suppressed them, or that the authors never even bothered to test the sensitivity of their model.
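One of those alternatives, a one-off intercept shift rather than a slope change, differs from the published model by only a single column in the design matrix, which shows how arbitrary the published choice is. A sketch with invented numbers (not the NYTS series):

```python
import numpy as np

# Two equally arbitrary parameterizations of an "e-cigarette effect" at
# 2009: (A) a slope change, as in the paper, and (B) a one-off level
# (intercept) shift with a common slope. Data are invented for illustration.
years = np.arange(2004, 2015)
prev = np.array([22.0, 21.0, 20.1, 19.2, 18.0, 17.1,
                 16.0, 15.2, 14.0, 13.1, 12.0])
t = years - 2004

def sse(X, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.sum((y - X @ beta) ** 2))

model_a = np.column_stack([np.ones_like(t), t, np.maximum(years - 2009, 0)])
model_b = np.column_stack([np.ones_like(t), t, (years > 2009).astype(float)])

print(f"SSE, slope-change model: {sse(model_a, prev):.3f}")
print(f"SSE, level-shift model:  {sse(model_b, prev):.3f}")
```

Structurally different "effect" models can fit a short noisy series about equally well, so reporting only one of them, with no sensitivity analysis, tells the reader nothing about which (if any) is right.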

Put simply, the analysis in this paper depends entirely on a set of assumptions that are never supported and, indeed, clearly unsupportable. While a convenient but clearly inaccurate modeling simplification may sometimes be justified for purposes of making a minor point, it is obviously a fatal flaw when it is the entire basis of a paper’s analysis and conclusions.

Despite this, critics of this paper have almost universally endorsed the authors’ unjustified assumptions rather than pointing out they are fatal flaws. In particular, they have focused on quibbling about the choice of inflection point. The authors chose 2009 as the zero-point for the introduction of e-cigarettes, while the critics have consistently argued that the next data wave (2011), when there was first measurable e-cigarette use, should have been used. They point out that the model then shows a substantially steeper linear decline for the latter half of the period, and suggest this is evidence that e-cigarettes accelerated the decline in smoking contrary to the original authors’ conclusions [i.e., if you use the original method but bend the straw at 2011 instead of 2009, the second half gets steeper after the inflection rather than staying about the same].

If the original model were defensible, one could indeed debate which of these parameterizations was more defensible (the advantage here seems to go to the original authors; generally one would choose a zero-point of the last year of approximately zero exposure, not the first year of substantially nonzero exposure). But the scientific approach is not to dispute parameterization specifics based on one’s political beliefs about e-cigarettes. It is to observe that if a genuinely debatable choice of parameters produces profoundly different model outputs, then the model cannot be a legitimate basis for drawing worldly conclusions. It is far too unstable. In other words, the critics offer an additional clear reason for dismissing the entire analysis, but rather than arguing this should be done, they endorse the core model.

Using this paper as context, I have written more about the tendency of some commentators to get tricked into endorsing the underlying erroneous claims they are ostensibly criticizing, here.

As a final observation, a serious analyst doing a quick-and-dirty fit to a percentage prevalence trend this steep would not choose a line, which predicts there will soon be a departure from the possible range [i.e., a downward linear trend will cross into negative values, which is not possible for a population prevalence]. The standard choices would be a logistic curve or exponential decay. However, it is likely that non-scientist readers (the predominant audience of papers on this topic) will not recognize this. Such readers frequently see fit lines overlaid on time series graphs, and may not recognize that these are just conveniences to help reduce the distractions from the noise in the data, not testaments that every time series can be assumed to be linear. Had the authors chosen a more appropriate shape for their fit line, it would have called more readers’ attention to the fact that they were making unjustified assumptions.
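The point about the shape of the fit line can be seen with simple arithmetic. The starting prevalence and rates here are invented, purely to show the qualitative difference:

```python
import math

# A straight downward trend in a percentage prevalence eventually predicts
# impossible negative values; exponential decay (like a logistic curve)
# stays within the valid range forever. Numbers are illustrative only.
p0 = 22.0          # starting prevalence, percent
lin_slope = -1.0   # linear decline, percentage points per year
decay = 0.06       # decay rate per year for the exponential alternative

for yrs in (10, 25, 40):
    linear = p0 + lin_slope * yrs
    expo = p0 * math.exp(-decay * yrs)
    print(f"after {yrs:2d} years: linear = {linear:6.1f}%, exponential = {expo:5.1f}%")
```

With these (hypothetical) numbers the linear extrapolation goes negative after about 22 years, while the exponential curve merely approaches zero, which is why a line is the wrong quick-and-dirty choice for a steep prevalence decline.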

The underlying issue here is much more important than one stupid paper. The reason that Dutra and Glantz could so easily get away with this is that epidemiology (and other associated social science, for those not inclined to call this epidemiology per se) is rife with strong modeling assumptions, which are seldom justified and often patently absurd. The reason several others wrote criticisms of the paper without ever identifying the fatal flaw (and, indeed, implicitly endorsing it) is that the problem is so pervasive that they do not even recognize it as a problem.

These modeling assumptions are sometimes so strong, as in the present case, that the analysis and results are purely artifacts of the assumptions. More often, they affect reported quantitative estimates in unknown ways, though almost certainly moving the result in the direction the authors would prefer. This problem is close to ubiquitous in epidemiology articles. I have written about this extensively, and I will summarize some of it in a Daily Vaper science lesson — riffing off of this material — shortly. So I will stop here.

Finally, there is the issue of PubMed Commons. For those not familiar, it is basically a third-party comments section which PubMed (which indexes the articles from most health science journals) attaches to the bottom of its page for each article. Anyone who has authored an article that is indexed by PubMed can comment there. For example, here is where the above comment used to appear. Yes, used to.

The story: After being asked by Clive (I am pretty sure) to write that, I posted it and shared the link. Zvi quickly wrote me a note suggesting I edit it. In the original version, the last paragraph had some bits about how the authors’ political biases obviously influenced their choice of which among the multitude of candidate models to use. Zvi pointed out that PMC has a rule against “speculation on the motives of authors”, and that they will remove a comment over that. This is a fair enough rule, though it is not actually enforced (every comment from tobacco controllers I recall ever seeing there was just attacking the authors for supposed conflicts of interest due to corporate funding, which is speculation about motives and, moreover, an aggressive accusation that someone will lie about results in exchange for funding). As anyone familiar with Glantz et al. will know, my statement was hardly mere speculation. But it was not important, so I was ok with editing the comment to remove those bits.

A few days later, presumably after Glantz complained and pressured PMC about the comment, it was deleted. (There is no chance that one of the other commentators on the Glantz paper, who I also criticized, was the one who complained. They are decent and honest people who would have replied to the comment if they thought I got something wrong, rather than insecure propagandists who know they are wrong and so try to censor disagreement.)

I later got an official notice from PMC that I had violated some unspecified rule and would have to change that if I wanted the comment to appear. I replied to ask what in my comment had violated what rule, because I genuinely had no idea. That kinda seems like a reasonable question. But I got no reply. I followed up. I suggested that perhaps they were basing their assessment on the original version, which did violate their rule, and had missed the edit. I got no response to any query.

Now if I had speculated that Glantz was a sexual harasser who stole credit for his advisees’ work, I could understand why they would have objected. Apparently I would have been right, but it would not have been relevant to the paper. But the content of that comment was fully on-point. Even someone who could not understand why it was entirely valid could at least recognize that it was not some flight-of-fancy.

Clive really wanted some version of the comment to appear. He went so far as to take the time to do a rewrite that he guessed would pass the censors, though it is impossible to know since they refused to explain what bit of it they wanted changed. I could have posted that. Or I could have tried to just repost the above version again, based on my speculation that they were working from the original version that did violate their rule.

But you know what? Fuck that.

On top of everything else, the single message I got from PMC contained a rather rude threat to pull my credentials to comment there if I again violated those vague rules they refused to clarify. That would create a rather difficult situation for someone who wanted to continue to give them free content. Fortunately for me, I am not inclined to gift them any more.

PMC is a ghost town. There are very few comments, and their content is usually little more than a tweet’s worth. Based on what I have seen, the comments are of low quality on average, and are about equally likely to criticize something that is right or inconsequential as they are to identify obvious major flaws. Thus no one ever thinks, “I should check PMC for comments on this paper I am reading.”

PMC needs people to give them free content, and a lot more of it, before it has value. They need to force authors to reply to comments, not to comply with takedown demands. So if they are going to blithely delete good content (I really doubt that they get, on any of their million pages, even one analysis per day of this quality), I am not really inclined to help them out. It is not like I or my loyal readers get any benefit from it appearing there, rather than just here or in The Daily Vaper.

During the course of my career, I have thought a lot about how to try to fix some of the problem of health science journals publishing so much obvious junk. Something like PMC, with its relatively large budget and the visibility that PubMed brings, is an approach with some potential. But if the plan is to crowdsource voluntary comments, then it needs to be free and open. Being rude and threatening to your voluntary contributors is definitely contraindicated.

A small operation that picks and chooses a very few targets, like a journal that encourages critical analysis or like Retraction Watch, can operate with tight command-and-control (if those in command are skilled and honest). But that cannot work for a system like PMC.

Trying to overlay command-and-control upon a crowdsourced system is hopeless. It is basically like Soviet planning, where someone tries to control a system (the market; the scientific debate) that is far too complicated to handle from the top down. But that is apparently what PMC is trying to do. They do not have the capacity to assess their content or, apparently, even to respond to requests for clarification. Instead, they just delete something without explanation if an author complains. Needless to say, this is an epic fail.

Moreover, given this arbitrary Soviet system, even someone who really really wanted to post to PMC would be a fool to write an analysis as detailed as mine, let alone the more involved analysis that would be required for a paper that is not so patently stupid. At least if you only write a tweet’s worth of content, you have not lost much if they delete it, and you stand some chance of guessing what to change if they make a vague demand that it be changed.

In summary: The way epidemiology models are created and accepted is a disaster, and the vast majority of the literature is suspect. This also specifically makes it very easy for intentional liars like Glantz and Dutra to concoct models to support their political positions. Not only do the academic and journal communities do nothing to resist this, but they actively work to resist any effort to respond to it, out of fear of admitting just how much rot there is. PMC is just one example of the many projects that could theoretically be part of the solution, but that actually remain part of the problem, resisting real scientific criticism.

Oh, and also, we rely on this literature to make behavioral, medical, and public policy decisions. Have a nice day.


The academic scandal hiding within the Stanton Glantz sexual harassment scandal

by Carl V Phillips

By now you have probably heard about the lawsuit against Glantz by his former postdoc, Eunice Neeley. Buzzfeed broke the story here, which (like other reports) appears to be based entirely on the complaint filed in a California court (available here). There appear to be no public statements, other than the blanket denial that Glantz posted to his university blog, which was picked up in at least one press report.

I am fascinated by several details that were too subtle for the newspaper reporters.

First, Glantz’s post in itself — using a university resource — may be yet another transgression. He declares, “I… deny every claim reported to be included in this lawsuit.” He notes that Neeley made a complaint to the university earlier this year and that someone else (presumably another junior researcher who was a Glantz protege[*] or employee) filed another complaint. He describes some details of the university investigation and repeats his denial. He does so just before — get this! — he writes, “Under University of California policy I am not supposed to discuss this investigation until it has been concluded and I have and will continue to respect that policy.” We know Glantz is not the sharpest tool, but you might think he would notice that the previous sentences in the same short document were him discussing this investigation before it concluded.

[*Dear reporters: The counterpart word for “mentor” is “protege”, “advisee”, or “student”. There is no such word as “mentee”, except perhaps as a misspelling of a flavor descriptor. That construction would only work if “mentor” meant “someone who ments”. Seriously, weren’t most of you English majors?]

Also the court complaint says (at 26) that Glantz had already talked publicly about the complaint against him and “boasted” about his defense. So we have a rather undisciplined defendant here. Not a big surprise.

Caveat: I am obviously just working from the complaint document, which might contain inaccurate assertions, though it also might fail to mention damning information that will come out later. Feel free to sprinkle these through as necessary: alleged, alleged, alleged, alleged, if true, if true, if true, claimed, claimed, reported.

To start with the headline complaints themselves: the sexual harassment allegations were not on the level of recent scandals that included coerced sex acts, aggressive unwanted propositions and …um… displays, and criminal battery (I assume few would suggest the unwanted hugs by Glantz, which were not described as being of an overtly sexual nature, are an example of battery). The sexual harassment complaints against Glantz are a pattern of boorish leering, a few instances of gratuitous and discomfiting sexual stories or analogies, the hugs, and some vague hearsay claims. That is basically it.

That is not to dismiss these, of course, let alone say they were acceptable. There is no queue, such that complaints of less heinous harassment need to wait their turn. The sooner we have lots of muscular #MeToo actions like this, the better for our society. The acts Glantz is accused of might still violate laws, university rules, or grant funders’ rules (I offer no opinion on whether they do). It is obvious why someone would dislike this experience and want to escape from it. Any decent employer, upon hearing of this, would order the supervisor to stop doing such things and respect the junior employee’s request for reassignment.

Still, the claims are about the vulgarity, ickiness, and insensitivity that women endure as a result of merely being in the same room as creeps, not assault or exploitation that was facilitated by the abuse of power. It is difficult to imagine these allegations (by themselves) taking out a senior professor. I have witnessed academic shops where bawdy talk by everyone was the norm and no one seemed to mind (though perhaps it is relevant that these were headed by women; a sensible male professor will stay out of any such conversations even if they are common banter among his advisees). If Glantz were sensible, he would have made this all go away with an apology and an “I did not realize… but from now on….” (Spoiler: Glantz is not sensible.)

The racism (if that is even the right word) complaints are weak tea. The lawyer writing the complaint made a big deal about racism issues (Neeley is black), for obvious reasons. But the specifics offered are merely: one story of what might have been (though it seems quite possible it was not) a stupid “you are black, so I just assumed you…”-type suggestion; an unsubstantiated assertion that papers by Neeley and another nonwhite postdoc were subject to more review and editing than their peers (which is pretty much impossible to measure, let alone to rule out a defense that their papers simply needed more editing); and a statement that seems to be Glantz telling another nonwhite researcher in the shop she was a diversity hire.

This is not to say that any of the above is ok. What Glantz said and did was stupid. Really really stupid, and rude and gross. It would be stupid in any setting, but an academic advisor has higher standards. If Glantz really did these things (and it is really difficult to fathom why someone would make up these particular details), but somehow did not understand the harm they were causing, he should have — upon receiving the first complaint about them — tripped over himself apologizing to Neeley and the others, promising to stop but still offering to make it easy for her and others to transfer to other advisors, and then staying out of their lives if that is what they seemed to want, and issuing a solemn statement of concern and promises to everyone in his shop.

If he did not really commit the alleged acts, he should still have apologized for whatever disaster did inspire the allegations, and still helped Neeley find another mentor if that was what she wanted. If he demonstrated he is not decent enough to do one of these, the university should have pushed him to. Note that it is possible that some of this happened — the complaint does not deny there were such attempts to fix the problem, and it is not as if the plaintiff’s attorney is going to volunteer that there were.

Regardless of the seriousness of the original behavior and any hypothetical attempts to fix the problem, what happened next — much more than the salacious allegations from the headlines — is why Glantz must be forced to resign, and the university should be on the hook for a payout.

The complaint chronicles how Glantz, after he was no longer Neeley’s supervisor, demanded that he still have an authorship role (both credit and contribution) in the paper she had written under him. Indeed, what he was demanding was lead-author control over it. This was a whole additional level of stupid, as well as making the hypothetical scenario (that he or the university sincerely tried to fix this) seem extremely unlikely. This was not some landmark paper that represented a scientific innovation by Glantz that he would understandably not want to give away. It appears to be just another iteration of the silly conspiracy stories about the tobacco industry, the amateurish cherrypicking historiography, that Glantz and his minions have written at least a dozen times before. This perfectly demonstrates his clueless arrogance (I suspect that narcissism would be the technical diagnosis) when he did not cut his losses and just give the worthless paper away. No. One. Would. Care.

What is worse, he demanded to be a coauthor on her future papers. This is where the story in the complaint gets more subtly interesting. He was either demanding that he keep working closely with her and have major influence over her, despite what had happened, or he was basically insisting she commit academic fraud on his behalf: to make him an author of papers he did not substantially contribute to.

And apparently not content to keep the academic fraud vague and deniable, Glantz marched on.

The complaint says he reneged on a promise to let Neeley be the corresponding author for their paper, demanding that role for himself. Even if there had been no such promise, and even setting aside that he was no longer her advisor and was a fool to not be contrite and conceding, it was properly up to her. She wrote the paper, and a postdoc is plenty senior to make the decision and take whatever role she wants. In addition, Glantz insisted on adding one of the in-house commentators to the paper as an author for what the complaint implies (though is very sloppy about actually saying) are mere reviewer comments, not worthy of authorship.

[Aside: Fake authorship is a major problem in public health publishing. Because the actual value (or total lack thereof) of someone’s papers is pretty much ignored, only numbers count. An unearned authorship still adds to the numbers. I have known researchers who struck a deal to include each other as authors on every paper, despite a complete lack of involvement. I quite enjoy asking anyone who has “authored” hundreds of papers — often more than ten per year — if they have even read them all. (They have not.)]

Apparently the university backed Glantz’s insistence that Neeley still had to work with him on the first paper and yield to his decisions about it. Stupid, stupid, stupid. The complaint spins this as a method to manufacture excuses to deny Neeley authorship of the paper, given that she was strongly averse to having any dealings with Glantz anymore. And, sure enough, a bit later Glantz went ahead and submitted the paper under his name, without telling Neeley or including her as an author. The complaint also says that Neeley heard that Glantz was planning to steal another paper she had written or was writing.

This (assuming it is true) is bright-line academic fraud. It is the worst kind of plagiarism and an unforgivable violation of his duties. Whatever one thinks of the seriousness of the headline rude and gross behavior, and whatever one thinks of an academic not endeavoring to make amends for hurting his protege even if accidentally, there is no getting out of this. The university has to force him to retire (or better still, overtly dismiss him), and his funders have to pull their funding. Importantly, unlike most of the acts alleged in the complaint, Glantz’s submission of Neeley’s paper without her permission or name is easy to demonstrate using evidence beyond mere eyewitness testimony.

This action also reinforces the claims in the lawsuit that Glantz retaliated and the university abetted it. The cliche that the coverup is worse than the original crime is usually bullshit. But disturbingly often in sexual harassment cases, the abuse of accusers after they speak up — to cover up or just to punish them for standing up for themselves — seems to cause more harm than the original acts (not to imply the original acts are not harmful, obviously, nor to deny that sometimes they are extremely harmful).

Despite this, the #MeToo movement should hesitate to embrace Neeley as a poster victim. Boorishness and academic technicalities are not exactly the most tragic abuse stories we have heard recently. In addition, there is an intriguing cause of action stated in the lawsuit (at 71), about unjust enrichment. No details are explained, so maybe this is just a meaningless copy-paste of the lawyer’s standard boilerplate. But if not, the only apparent enrichment in sight is the $20 million that FDA gave Glantz to write stupid things about tobacco. Or, apparently, to hire people to write stupid things about tobacco that he puts his name on. If any of his actions were a violation of the terms of that contract (I have no idea) and if Neeley can get official federal whistleblower status (again, no idea) then she and her lawyer could collect millions if the feds sue University of California to pay back the grant money. It is better they get rich than the money be used to produce junk science, of course, but we are still not talking about selfless nobility.

[Update/Correction: The above paragraph incorrectly implies that the FDA grant is the only substantial funding Glantz has or had that is at risk for clawback or that might be subject to whistleblower awards. There are other big paychecks that also might. See the comments for more details.]

Or it could be quite the opposite, that Neeley is willing to expend her own resources just to retain control of her paper (as is demanded, as a matter of immediate relief, in the complaint), and all the demands for monetary compensation are just the lawyer looking to eke out his fee. We do not know at this point. However, we do know that there are no good guys in this story.

Neeley was, by choice, a Glantz acolyte and thus is presumably seeking a career producing junk science and riding the tobacco control gravy train. Everyone else in the story — the unnamed other victims and the senior researchers in Glantz’s shop whom Neeley sought help from — was also in the business of producing anti-science that hurts a lot of people. Moreover, if(!) Glantz’s actions really were as long-standing and frequently-discussed as implied in the complaint, then the senior people are guilty of not speaking up about it to protect their proteges, and Neeley deserves moral credit as a whistleblower.

There is ultimately something very Praljak-like about this (the war criminal who recently poisoned himself, getting the last word, after being convicted in The Hague). If Glantz goes down for this — and it is hard to see how he does not unless there are transparent fabrications in the complaint — it lets him off way too easily. It will be because he was a clueless oaf, and did not know when to cut his losses. It will let him quietly skate away from the tremendous damage his junk science has caused.

If he gets deposed, he will just be asked about roving eyes and authorship credit. He will not be asked about how many different data and model combinations he ran, and then hid, before finding the one that created the “Helena miracle” illusion. Nor will he be asked about how many people have explained to him the glaring flaws in his e-cigarette analyses — confusing liquids and solids, ignoring obvious confounding, combining incommensurate results, etc. — and how many times he further repeated the same misinformation anyway.

As with Praljak’s suicide, it is a bit closer to justice than him just getting hit by a bus, but it is not enough. Still, whatever puts an end to such a career — whether harassment and plagiarism charges, a human rights trial, suicide, or a bus — stops him from causing more harm, and that is what matters most.

Gateway effect denial claims are a target-rich environment

by Carl V Phillips

I have repeatedly tried to explain what gateway claims mean — that engaging in one behavior causes another behavior (in the present context, that vaping causes smoking), what they do not mean, and what constitutes valid evidence for or against their existence. Why bother? As I noted in a recent piece on the topic, for The Daily Vaper (which links back to some of my more in-depth pieces), “most of the claims by vaping proponents that there is no gateway effect are also nonsense.” Indeed, they create such a target-rich environment for criticism that even Simon Chapman can find the flaws.

Index of my Daily Vaper articles (2)

by Carl V Phillips

In my last post, I noted that most of my writing is currently at The Daily Vaper. I also promised that I would keep an index of those publications here for those of you who follow this blog but not that site, highlighting the ones that fans of this blog might be particularly interested in. This also provides an option for commenting on them, which DV does not have, and a chance for me to add a bit more about some of them.

Here is my belated second entry in the series. I will try to do this more frequently so the list is not so long (sorry — maybe keep this tab open and take a few days to get to all of them you want to see).

In rough descending order of what I think regular readers of this blog would find most interesting (I expect you will want to read at least the first seven):

1. I wrote a science lesson about anchoring bias and why it means that we should really stop describing the risks from low-risk tobacco products in relation to smoking (e.g., “99% less harmful”). I have hinted at some of this here, but I have never really nailed it before. This is new analysis. Anyone interested in my evolving thinking about accurate messaging — based on more years of experience and thought than anyone else involved in this realm — should definitely read this.

2. I reported on a court ruling that is fairly obscure, but truly delightful: The usual gang of anti-tobacco groups petitioned to be co-defendants, alongside FDA, in a suit against FDA by cigar manufacturers over aspects of the deeming regulation. The judge denied it. Why should we care? Because the ruling basically said that they are not stakeholders. For those of us constantly frustrated by the bullshit suggestion that they are (let alone that the primary stakeholders, tobacco product consumers, are not), this is just too good. Unfortunately I suspect I am the only one who will try to make anything of this. I strongly encourage those of you who are involved in advocacy (and especially those involved in lawsuits) to run with it. It really is a huge potential lever.

3. This piece is about the unethical scientific practices of tobacco controllers, specifically their flouting of human subjects protection rules. These are bright-line violations of codified rules, unlike the usual unethical behavior of tobacco control which is evil but not unlawful. I mention a couple of the reports about that which I have written here along with some new material. I suggest that perhaps a blanket boycott campaign would make sense. If I have time, I will write a piece about that specifically for this blog.

4. My personal favorite is this one, where I catch FDA chief Scott Gottlieb, in congressional testimony, offering basically the NIDA definition of “addiction”, a definition that clearly excludes tobacco products (including cigarettes). As my readers know, I have made a study of what people mean when they say “addiction”, and how there is apparently no viable definition that anyone wants to defend that actually includes tobacco/nicotine. In this case, Gottlieb was stuck because he had to talk about opioid addiction, and so was effectively forced to undercut all the CTP rhetoric on the subject. Well, undercut it if anyone decides to challenge them based on this, which probably will not happen. The industry is not exactly known for being that clever.

(Foreshadowing note: I actually think I have figured out how to characterize what people mean when they say smoking etc. are addictive. One of these days I hope to write a major piece on this.)

5. In what could basically be an uncharacteristically terse version of a post here, I wrote about a recent “what parents need to know” statement about vaping in JAMA Pediatrics. It was all the terrible you might expect. I shredded it. If your appreciation for shredding exceeds your inclination to be annoyed by the terrible, you should find it a satisfying little read.

6. I reported how, after Senator Chuck Schumer launched an amazingly stupid attack on vaping in a press conference, Gottlieb practically fellated him on Twitter. (No, I did not put it quite that way.) This suggests that despite all the overly-optimistic talk of regime change at FDA, nothing has really changed in terms of who they consider their political patrons. (*cough* *told you so* *cough*)

7. My most recent piece was about the FDA’s Orwellian-named “Real Cost” campaign. I noted that they are about to aim this anti-scientific propaganda campaign, currently focused on smokeless tobacco and smoking, at vaping. I recount some of the campaign’s content and assess what they will do regarding vaping. I will write more about this shortly.

8. I did some original data analysis in this article, based on a recent CDC report of vaping rates across demographics and occupations. The authors could not see past the raw vaping rates. This is merely an uninteresting echo of the smoking rates; whoever smokes most is going to vape most. I looked at the ratio of vapers to smokers, which is actually useful. I found that across almost every group, the ratio is very close to the overall average. This effectively shows that the rate of switching from smoking to vaping is about the same across the different groups. The one big exception was African-Americans, where the ratio is much lower. That is, few smokers in that population are switching to vaping. I suppose this is worth a journal article, but I do not have time. (Free easy publication for any student or academic who wants to take the lead and write that with me! Seriously, let me know if you are interested.)

9. In this brief piece, I review the results of a recent paper that shows the anti-vaping bias in mainstream media reporting. It just confirms what we all know, but does a nice job of it. Most notably, it observes that anti-vaping statements tend to be attributed to people with supposedly expert credentials (though obviously they are really either clueless or liars), while the truth is attributed to advocates who the average reader will (mistakenly) consider less credible.

10. Here I analyzed a research paper out of FDA which is part of their assessment of how MRTP labeling might affect consumers. Unsurprisingly, it seems designed to make the case that manufacturers should not be allowed to tell the truth about their products. The study was bad, and they clearly intend to spin it worse.

11. I reported on FDA’s release of “adverse event” type records that they collect for tobacco products, which are really mostly about vapor products. I noted it is pretty much meaningless, but that it might be used in anti-vaping advocacy. I indicated my suspicion that FDA released it for just that purpose, not because of some belief it should be available because it is genuinely informative (it is not).

12. In this science lesson, I summarized my analysis that shows that the optimal tax rate for low-risk tobacco products is zero if the goal is to promote population health, or any other defensible stated goal. Not “lower than the tax on cigarettes” or “proportional to the risk”, but zero. My readers will already be familiar with these arguments, though if you are looking for a short summary, this is it.

13. I tried to assess the recent FDA “guidance” about the ban on free samples of vapor products (e.g., sampling e-liquid flavors), now that they are deemed as tobacco products, with all the rules that apply to them. I say “tried” because the guidance sort of says that it does not apply to adults-only venues (e.g., most vape shops). But how exactly this will play out — i.e., will flavor sampling be banned? — is not clear.

14. I analyzed a recent survey by BAT about beliefs about the risks from vaping. It is pretty straightforward “latest study” reporting, though I got some additional data from the researchers that allowed me to offer a better assessment than those who were just working from the press release. The main takeaway is that even in the UK, a ridiculously large portion of the population does not understand that vaping is much less harmful than smoking.

15. I introduced readers to the CASAA Testimonials collection that I created in 2013. Long-time readers of this blog will be familiar with it. I plan to publish more little articles that are excerpts from that collection.

16. Finally, I reported on the fight over vapor product taxes in Pennsylvania. The upshot is that tax structure, not just tax rates, matters a lot. A rather more interesting aspect of that story — and of another story that got spiked — does not appear there. I hope to get time to report it here sometime (ooh, more foreshadowing).

(Damn, that is a lot of material. Comments welcome. I suggest using the serial numbers if you are commenting on one in particular.)

Note to readers: look for me at @TheDaily_Vaper

by Carl V Phillips

Those of you who follow this blog but do not follow me on social media may be unaware that I am now writing for The Daily Vaper, a unit of The Daily Caller newspaper. Since the time and energy I spend doing that is basically the same time and energy I spent on this blog, there may not be a lot here. Not none, but a lot less.

I will try to do a periodic post here to index my Daily Vaper articles, particularly calling attention to the ones that would have been good fits for this blog.

Sunday Science Lesson: Debunking the claim that only 16,000 smokers switched to vaping (England, 2014)

by Carl V Phillips

When this journal letter (i.e., short paper), “Estimating the population impact of e-cigarettes on smoking cessation in England” by Robert West, Lion Shahab, and Jamie Brown came out last year, most of us said “wait, wot?” The authors estimated that in 2014, about 16,000 English smokers became ex-smokers because of e-cigarettes (a secondary analysis offered 22,000 as an alternative estimate). But that year saw an increase of about 160,000 ex-smokers who were vapers in the UK (the year-over-year increase for 2015 versus 2014) according to official statistics. In addition, there were about 170,000 more ex-smokers who identified as former vapers. Since the latter number subtracts from the number of ex-smokers who were vapers in 2015, those people need to be added back. So the year-over-year increase in English ever-vapers among ex-smokers appears to be nearly 200,000, after roughly adjusting for the different populations (England is 80% of the UK population). Thus West et al. are claiming, in effect, that the vast majority of people who went from smoking to vaping did not quit smoking because of vaping.

My calculation is rough, and for several reasons it may be a bit high (e.g., the measured points in 2015 and 2014 demarcate a year that falls slightly later in calendar time than 2014 itself, and the rate of vaping initiation was increasing over time). But we are still talking about well over 100,000 new ex-smoker vapers. Probably closer to 200,000. So this would mean that about 90% of new ex-smoker vapers either would have quit smoking that year even without vaping, had quit tobacco entirely and only later took up vaping, or are not “real quitters” (i.e., they were destined to start smoking again before they would “count” as having quit, which is not well defined, but the authors seem to use one year as the cutoff). This seems rather implausible, to say the least.
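For concreteness, the reality-check arithmetic can be sketched in a few lines. The inputs are the approximate figures quoted above, and the 80% England share is a rough adjustment, so treat the output as an order-of-magnitude check, not a point estimate:

```python
# Rough reality check on West et al.'s 16,000 figure, using the approximate
# official-statistics numbers quoted above. Order-of-magnitude only.

uk_increase_current_vapers = 160_000  # UK year-over-year increase in ex-smokers who vape
uk_exsmoker_former_vapers = 170_000   # UK ex-smokers who identified as former vapers
england_share = 0.80                  # England is roughly 80% of the UK population

rough_new_exsmoker_vapers = (uk_increase_current_vapers
                             + uk_exsmoker_former_vapers) * england_share

west_estimate = 16_000  # West et al.'s headline estimate

print(rough_new_exsmoker_vapers)  # ~264,000 before the downward adjustments in the text
print(rough_new_exsmoker_vapers / west_estimate)  # more than an order of magnitude apart
```

Even after generous downward adjustments for timing and trend, the reality-check figure stays well over 100,000, against a claimed 16,000.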

This is an extraordinary claim on its face given what we know about the advantages of quitting by switching, and more so given that more detailed surveys of vapers (example) show almost all respondents believe they would still be smoking had they not found e-cigarettes. It must be noted that most respondents to those surveys are self-selected vaping enthusiasts who differ from the average new vaper, and that a few of them might be wrong and would have quit anyway. But the disconnect is still far too great for West’s weak analysis (really, assumptions) to come close to explaining.

I never bothered to comment on the paper at the time it came out because the methodology was so weak and the result so implausible that I did not think anyone would take it seriously. But the tobacco wars seldom meet a bit of junk science they do not like. In this case, Clive Bates asked me to examine the claim (and contributed some suggestions on this analysis and post) because some tobacco controllers have taken to saying “e-cigarettes caused only 16,000 people to quit smoking in England! so we should just prohibit people from using them!”

The proper responses to this absurd assessment and demand, in order of importance, are:

  1. It would not matter if they caused no one to quit smoking. It is a violation of the most fundamental human rights to use police powers to prohibit people from vaping if they want to. People have a right to decide what to do with their bodies. Moreover, in this particular case, you cannot even make the usual drug war claims that users of the product are driven out of their minds and do not understand the risks and the horrible path they will be drawn down: Vaping is approximately harmless, most people overestimate the risks, and it leads to no horrible path. It is outlandish — frankly, evil — to presume unto oneself the authority to deny people this choice.
  2. But even if you do not care about human rights and only care about health outcomes or whatever “public health” people claim to care about, causing a “mere” 16,000 English smokers to quit annually is quite the accomplishment. There is no plausible basis for claiming any recent tobacco control policy has done as much. Since there is no measurable downside, this is still a positive. Also, the rate of switching probably could be increased further with sensible policies and truthful communication of relative risks.
  3. The rough back-of-the-envelope approach used in the paper could never provide a precise point estimate even if the inputs were optimally chosen. But the inputs were not well chosen. The analysis included errors that led to a clear underestimate. When a back-of-the-envelope result contradicts a reality check, we should assume that reality got it right.

So I am taking up here what is really a tertiary point.

Back of the envelope calculations

West et al. carried out a back-of-the-envelope calculation, a simple calculation based on convenient approximations that is intended to produce a quick rough estimate. It happens to have glaring errors, but I will come back to those. Crude back-of-the-envelope calculations have real value in policy analysis. I taught students this for years. In my experience, when there is a “debate” about the comparative costs and benefits of a policy proposal, at least half the time a quick simple calculation shows that one is greater than the other by an order of magnitude. The simple estimate can illustrate that the debate is purely a result of hidden agendas or profound ignorance, and also eliminate the waste of unnecessary efforts to make precise calculations.

When doing such an analysis, it is ideal if you get the same result even if you make every possible error as “conservative” as is plausible (i.e., in the direction that favors the losing side of the comparison). West’s analysis would thus be useful if it were presented as follows: “Some people suggest that the health cost from vaping experienced by new vapers outweighs the reduction in the health cost from smoking cessation that vaping causes. Even if we assume that vaping is 3% as harmful as smoking, the total health risk of the additional vapers (the annual increase) would be on the order of the equivalent of the risk for about 5,000 smokers. Our extremely conservative calculation yields on the order of 20,000 smokers quitting as a result of vaping. So even with extreme assumptions, the net health effect is clearly positive.”
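That hypothetical conservative comparison can be sketched as follows. Note that the 160,000 figure for the annual increase in vapers is my assumption, borrowed from the rough year-over-year numbers discussed earlier; the 3% harm fraction and 20,000 quitters are the deliberately conservative inputs from the hypothetical framing above:

```python
# Sketch of the conservative cost-benefit comparison described above.
# ASSUMPTION: additional_vapers is a rough stand-in for the annual increase
# in vapers; it is not a figure from West et al.

additional_vapers = 160_000   # assumed annual increase in vapers
harm_vs_smoking = 0.03        # deliberately pessimistic: vaping 3% as harmful as smoking

smoker_equivalent_risk = additional_vapers * harm_vs_smoking  # ~5,000 smokers' worth of risk
conservative_quitters = 20_000  # extremely conservative count of smokers caused to quit

# Even with every input tilted against vaping, the benefit dwarfs the cost.
print(smoker_equivalent_risk, conservative_quitters)
```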

But the authors did not claim to be offering an extremely conservative underestimate for purposes of doing such a calculation. They implicitly claimed to be providing a viable point estimate. And that requires a more robust analysis rather than rough-cuts, and best point estimates rather than worst-case scenarios. It also requires a reality check about what would have to be true if the ultimate estimate were true, namely that almost everyone who switched from smoking to vaping did not stop smoking because of vaping.

West’s estimation based on self-identified quit attempts

The crux of their calculation is the following: Their surveys estimate that 900,000 smokers self-identify as having attempted to quit smoking using e-cigarettes (please read this and similar statistics with an implicit “in this population, during this period” and I will stop interjecting it). They then assume that 2.5% of them actually did quit smoking because of e-cigarettes.

Where does the 2.5% come from? It is cited to, and seems to be based mainly on, the results of the clinical trials where some smokers were assigned to try a particular regimen of e-cigarettes; the 2.5% is an estimate of the rate at which they quit smoking above those assigned to a different protocol.

Before addressing the problems with using trial results, note that the second paper they cite as a basis for the 2.5% figure is one by their own research group. How they got from that paper’s results to 2.5% is unfathomable. That paper was a retrospective study of people who had tried to quit smoking using various methods; it found that those reporting using e-cigarettes were successful about 20% of the time, which beat out the two alternatives (unaided and NRT) by 5 and 10 percentage points. If they had used ~20% instead of ~2%, their final result would have been up in the range that would have passed the reality check. So what were they thinking?

I cannot be certain, but am pretty sure. It appears they only looked at differences in cessation rates and not the absolute rates, so the 5 or 10 rather than the full 20. Several things they wrote make it clear this is how they were thinking. This is one of several fatal flaws in their analysis. There are two main pathways via which e-cigarettes can cause someone to quit smoking (which means it would not have happened without them): E-cigarette use can cause a quit attempt to be successful when that same quit attempt would not have otherwise been successful, or it can cause a quit attempt (ultimately successful) that would not have otherwise happened. West et al. are pretty clearly assuming that the second of these never happens. I am guessing that the authors did not even understand they were making a huge — and clearly incorrect — assumption here.

Causing quit attempts is a large portion of cases where e-cigarettes caused smoking cessation. Indeed, in my CASAA survey of vapers (not representative of all vapers, but a starting point), 11% of the respondents were “accidental quitters”, smokers who were not even actively pursuing smoking cessation, but who tried e-cigarettes and were so enamoured that they switched anyway. Add to these the smokers who had vague intentions of quitting but only made a concerted effort thanks to e-cigarettes, and probably about half of all quit attempts using e-cigarettes do not replace a quit attempt using another method. So if half the 900,000 made the quit attempt because of e-cigarettes and 20% succeeded, we have, right there, a number that is consistent with the reality check I proposed.

Of course they did not use that 20%, and it does seem too high. What they did was assume that 5% would have succeeded in an unaided quit attempt without e-cigarettes — and all the same people would have made that attempt — and so 7.5% (5%+2.5%) actually succeeded when using e-cigarettes. But if half never would have made that attempt then a full 7.5% of them should be counted as being caused to quit by e-cigarettes, which more than doubles the final result (“more than” because their final subtraction, below, would not double but should actually be reduced).
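The difference between the two accountings can be sketched as follows (the 50/50 split between caused and replaced attempts is the rough share argued above, not a measured quantity):

```python
# West et al.'s implied calculation versus the correction argued above.

attempters = 900_000      # smokers who tried to quit using e-cigarettes
success_rate = 0.075      # their assumed 5% unaided success + 2.5% increment
increment_only = 0.025    # the increment they actually credited to e-cigarettes

west_style = attempters * increment_only  # ~22,000: credits only the increment

# Correction: attempts that would not have happened without e-cigarettes
# should be credited with the full 7.5% success rate, not just the increment.
caused_attempts = attempters * 0.5    # rough share argued above (an assumption)
replaced_attempts = attempters * 0.5

corrected = (caused_attempts * success_rate
             + replaced_attempts * increment_only)

print(west_style, corrected)  # the corrected figure is double the original
# ("more than double" once their final subtraction step is also reduced)
```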

As for why they did not use that 20%, I suspect (though they do not say) that when looking at the numbers from that paper, West et al. focused not only on the differences (the error I just discussed) but on the “adjusted” rates of how much more effective e-cigarettes were than the other methods, which were considerably lower than the numbers I quoted from the paper above. This too is an error. Public health researchers think of “adjusting” (attempting to control for confounding) as something you just do, a magical ritual that always makes your result better. This perception is false for many reasons, but a particularly glaring one in this case: The adjusted number is basically the measure of how helpful e-cigarettes would have been, on average, if those who tried to switch to them had the same demographics as smokers using other cessation methods. Smokers who try to switch to e-cigarettes have demographics that predict they are more likely to succeed in switching than the average smoker. Of course they do! People know themselves (a fact that seems to elude public health researchers). The ones who tried switching were who they were; they were not a random cross-section of smokers. So it seems that West et al. effectively said “pretend that instead of self-selecting for greater average success, those who tried to switch were chosen at random, and instead of using the success rate for the people who actually made that choice, we will use instead the number that would have been true if they were random.”

[Caveat: The attempt to control for confounding could also correct for the switchers having characteristics that make them more likely to succeed in quitting no matter what method they tried. So some of the “adjustment” is valid — but only for those who would have tried anyway — but much of it is not.]

Clinical trials

That last point relates closely to the other “evidence” that was cited as a basis for that 2.5% figure, and appears to have dominated it: the clinical trials.

Clinical trials of smoking cessation are useless for measuring real-world effects of particular strategies when they are chosen by free-living people. At best they measure the effects of clinical interventions. But in this case, these rigid protocols are not even a good measure of the effect of real-world clinical interventions in which smoking cessation counselors try to most effectively promote e-cigarettes by meeting people where they are and making adjustments for each individual. I have previously discussed this extensively.

A common criticism is that the trials directed subjects toward relatively low-quality e-cigarettes. That is one problem. More important, the trials did not mimic the social support that would come from, say, a friend who quit smoking using e-cigarettes and is offering advice and guidance. The inflexibility of trials does not resemble the real-world process of trying, learning, improving, asking, and optimizing that real-world decisions entail. Clinical trials are designed to measure biological effects (and even then they have problems), not complex consumer choices.

But it is actually even worse than that. A common failing in epidemiology is not having a clue about what survey respondents really mean when they answer questions. There is no validation step in surveys where pilot subjects are given an open-ended debriefing of how they interpreted a question and what they really meant by their answer. (I always do that with my surveys, but I am rather unusual.) So consider what a negative response to “tried to quit smoking with e-cigarettes” really means. If a friend shoved an e-cigarette into a smoker’s hand and said “you should try this”, but she refused to even try it, she would undoubtedly not say she tried to quit smoking with e-cigarettes. But in a clinical trial, if that were her assignment, she would be counted among those who used e-cigarettes to try quitting, thus pulling down the success rate.

If she tried the e-cigarette that was thrust at her, but did not find it promising, chances are that in a survey she would probably not say she tried quitting using e-cigarettes. (She might, but given the lack of any reporting about piloting and validation of these survey instruments, we can only guess how likely that is.) If she passed that first hurdle, of not rejecting e-cigarettes straightaway, but used them sometimes for a few days or weeks, she might or might not say she tried quitting using e-cigarettes. But if she actually quit using e-cigarettes, she would undoubtedly count herself among those who tried to quit using e-cigarettes. I trust you see the problem.

It is the same problem that is common in epidemiology when you read, say, that 20% of the people who got a particular infection died from it. This usually means that 20% of the people who got sick enough from it to present for medical care and get diagnosed died, but countless others had mild or even asymptomatic infections. Everyone in the numerator (died in this case, quit in the case of e-cigarettes) is counted, but an unknown and probably very large portion of those in the denominator (got the infection, were encouraged to try an e-cigarette) are not. Clinical trial results are (at best) analogous to the percentage you would get if you did antibody tests in the population to really identify who got the infection. That is the right way to measure the percentage of infected who die. But if you then applied that percentage to the portion who presented for medical treatment, you would be underestimating the number of them who would die. That is basically what West et al. did. Their 900,000 are those for whom e-cigarettes seemed promising enough to be worth seriously trying as an alternative, but they applied a rate of success that was (again, at best) a measure of the effect on everyone, including those who did not consider them promising enough to try.
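The denominator mismatch can be illustrated with made-up numbers (everything here is hypothetical, chosen only to match the 20% figure in the analogy):

```python
# Hypothetical illustration of the denominator mismatch described above.

infected = 10_000    # everyone who actually got the infection
presented = 1_000    # the sicker subset who sought care and were diagnosed
deaths = 200         # assume all deaths occur among those who presented

rate_among_presented = deaths / presented  # 20%: what the clinic observes
rate_among_infected = deaths / infected    # 2%: the antibody-survey rate

# Applying the population-wide rate to the presenting subset badly
# underestimates their deaths -- the analogue of West et al.'s error.
predicted_deaths_wrong = rate_among_infected * presented
print(predicted_deaths_wrong, deaths)  # 20 predicted versus 200 actual
```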

This would be a fatal flaw in West’s approach even if the trials represented optimal e-cigarette interventions, providing many options among optimal products, and the hand-holding that would be offered by a knowledgeable friend, vape shop, or genuine smoking cessation counseling effort. They did not, and so underestimated even what they might have been able to measure.

Final step

As a final step, West et al.’s approach debits e-cigarettes with an estimated decrease in the use of other smoking cessation methods caused by those who tried e-cigarettes instead. These are the methods that are believed to further increase the cessation rate above the unaided quitting that West debited across the board (the major error discussed above). We can set aside deeper points about whether estimates of the effects of these methods, created almost entirely by people whose careers are devoted to encouraging these methods, are worth anything. West et al. assume that those methods would have had average effectiveness had they been tried by those who instead chose vaping. They also still assume that every switching attempt would have been replaced by another quit attempt in the absence of e-cigarettes, as discussed above. This lowers their estimate from 22,000 to 16,000. But a large portion of smokers who quit using e-cigarettes do so after trying many or all of those other methods, often repeatedly. Assuming those methods would have often miraculously been successful if tried one more time makes little sense.

As a related point that further illustrates the problems with their previous steps, recall that the 2.5% is their smoking cessation rate in excess of that of those who tried unaided quitting or some equivalently effective protocol. But it seems very likely that the average smoker who tries to switch to e-cigarettes has already had worse success with that other protocol than has the average volunteer for a cessation trial. This is the “I tried everything else, but then I discovered vaping” story. I am aware of no good estimate for this disparity, but if the average smoker who tried to switch were merely 1 percentage point less likely than average to succeed with the other protocol (e.g., because she already knew that it did not work for her), then the multiplier should have been 3.5% (7.5%-4% rather than 7.5%-5%). This is trivial compared to the error of using the incredibly low estimated success rate suggested by the trials in the first place, of course, but that little difference alone would have increased West’s estimate by 40%. This illustrates just how unstable and dependent on hidden assumptions that estimate is, even apart from the major errors.
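The instability is easy to verify (the one-percentage-point shift is the hypothetical from the text, not an estimate):

```python
# Sensitivity of West et al.'s increment to a small shift in the assumed
# counterfactual success rate, as described above.

success_with_ecigs = 0.075
assumed_counterfactual = 0.05     # their assumed unaided success rate
shifted_counterfactual = 0.04     # hypothetical: these smokers do 1 point worse unaided

increment_used = success_with_ecigs - assumed_counterfactual     # 2.5%
increment_shifted = success_with_ecigs - shifted_counterfactual  # 3.5%

relative_increase = increment_shifted / increment_used - 1
print(relative_increase)  # ~0.4: a 40% jump in the estimate from a one-point tweak
```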

Returning to the reality check

But lest we get lost in the details, the crux is still that West implicitly concluded that the vast majority of those who switched from smoking to vaping did not quit smoking because of vaping. The authors never reflect on how that could possibly be the case. They do, however, offer an alternative analysis, in what are effectively the footnotes, that gives the illusion of responding to this problem without actually doing so. They write:

The figure of approximately 16,000–22,000 is much lower than the population estimates of e-cigarette users who have stopped smoking (approximately 560,000 in England at the last count, according to the Smoking Toolkit Study). However, the reason for this can be understood from the following….

What follows is even weirder than their main analysis.

West’s “alternative” analysis

They actually start with that 560,000. That is inexplicable since it is possible to estimate the year-over-year change in 2014, as I did, rather than working with the cumulative figure. The 560,000 turns out to be well under half what you get if you add the current vapers and ex-vapers among ex-smokers from the statistics I cite above. So their number already incorporates some unexplained discounting from what appears to be the cumulative number. But since I am baffled by this disconnect, I will just leave this sitting here and proceed to look at what they did with that number.

As far as I can understand from their rather confusing description of their methods here, their first step is to eliminate those who were already vaping by 2014, and thus did not switch in 2014. That makes sense, though it would have been easier to just start with that. When they do this, they leave themselves with 308,000. So they started with something much lower than what you get from the statistics I looked at, and ended up with something that is half-again higher than the rough estimate from those statistics. Um, ok — just going to leave that here too. But the higher starting figure makes it even more difficult for them to explain away the reality check.

Their next step is the only one that seems valid. They estimate that 9% of ex-smokers who became vapers did so sometime after they had already completely quit smoking, and subtract them. This is plausible. An ex-smoker who is dedicated to never smoking again still might see the appeal of consuming nicotine in a low-risk and smoking-like manner again. (Note that this should be counted as yet another benefit of e-cigarettes, giving those individuals a choice that makes them better off, even though the “public health” types would count it as a cost because they are not being proper suffering abstinents. It might even stop them from returning to smoking.)

Of course, this only makes a small dent. So where does everyone else go? Most of them go here:

It has to be assumed on the basis of the evidence [6, 7] that only a third of e-cigarette users who stopped smoking would not have succeeded had they used no cessation aid

…and here:

It is assumed that, as with other smoking cessation aids, 70% of those recent ex-smokers who use e-cigarettes will relapse to smoking in the long term [11]

This takes them down to 28,000.
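To make the chain of discounts concrete, here is a minimal sketch reproducing the arithmetic described above. The ordering of the steps is my reconstruction from their description, not anything West et al. published as code, and the fractions are simply the ones quoted in the text:

```python
# Reconstruction (my reading, not West et al.'s actual method) of the
# chain of discounts described above, using the figures quoted in the text.

new_exsmoker_vapers = 308_000  # started vaping in 2014, ex-smokers by year end

# Remove the ~9% estimated to have fully quit smoking before taking up vaping.
still_in = new_exsmoker_vapers * (1 - 0.09)

# Assume only a third "would not have succeeded had they used no cessation aid".
attributable = still_in * (1 / 3)

# Assume 70% long-term relapse, "as with other smoking cessation aids".
long_term_quitters = attributable * (1 - 0.70)

print(round(long_term_quitters))  # → 28028, i.e. roughly the 28,000 in the text
```

Note that the two big multiplicative assumptions (one third, then 30% retained) together discount the starting figure by a factor of ten, which is the whole result; the 9% step is nearly irrelevant by comparison.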

Taking the latter 70% first, any limitations in relying on a single source for this estimate (another West paper) are overshadowed by: (a) There is no reason to assume switching to vaping will work as poorly, by this measure, as the over-promising and under-delivering “approved” aids that fail because they do not actually change people’s preferences as promised. Indeed, there is overwhelming evidence to the contrary. (b) Many of those in the population defined by “started vaping that year and were an ex-smoker as of the end of the year” have already experienced a lot of the “long term”. That is, if we simplify to the year being exactly calendar 2014, some people joined that population in December, and thus a (correct, undoubtedly much lower than 70%) estimate of the discounting between “smoking abstinent for a week or two thanks to e-cigarettes” and “abstinent at a year” (a typical measure for “really quitting” as noted above) is appropriate. But some joined the population in January and are already nearly at the long term. On average, they will have been ex-smokers for about six months, and being abstinent at six months is a much better predictor of the long run than the statistic they used (which, again, is wrong to apply to vaping). Combining (a) and (b) makes it clear that this is a terrible estimate.

As for the first of those major reductions, references 6 and 7 do not actually provide any reason that “only a third…has to be assumed”. Those are the same references they cite for the 2.5% above. So this is just a reprise of the 2.5% claim, and suffers from the same errors I cited above.

You see what they did there, right? The reality check I offered is “your results imply that 90% of new ex-smoker vapers did not quit because of vaping; can you explain that?” Either anticipating this damning criticism or by accident, they provided their answer: “Yes, we assume — based on nothing that remotely supports the assumption — that 70% of them would have quit anyway (and 9% were already ex-smokers, and some other bits).”

This step basically sneaks in the same fatal assumptions from their original calculation but is presented as if it offers an independent triangulation that responds to the criticism that their original calculation has implausible implications. Here is a pretty good analogy: Someone measures a length with a ruler that is calibrated wrong by a factor of ten. They are confronted with the fact that a quick glance shows that their result is obviously wrong. So they make a copy of their ruler and “validate” their results with an “alternative” estimation method.

Oh, and at the end of this they knock off another 6,000 using what appears to be double counting, but at this point who really cares?


Their first version of the estimate is driven mainly by their assumption that attempting to switch to vaping is close to useless for helping someone quit smoking compared to unaided quitting, and also that all those who attempted to switch would have tried unaided quitting in the absence of e-cigarettes. There are also other errors. Their second version is based on the “reasoning” that, because we have assumed that attempting to switch to vaping is close to useless, most of those who we observed actually switching to vaping must not have really quit smoking because of vaping — and so (surprise!) it produces approximately the same low estimate.

So nowhere do they actually ever address the reality check question:

Seriously? You are claiming that almost everyone who ventured into one of those weird vape shops, who spent hundreds of pounds on e-cigarettes, who endured the learning curve for vaping, who ignored the social pressure to just quit entirely, and who decided to keep putting up with the limitations and scorn they faced as a smoker and would still face as a vaper, that almost all of them were someone who was going to just quit anyway? You are really claiming that almost all of them said, “You know, I think I will just quit buying fags this week — oh, wait, you mean I instead could go to the trouble to learn a new way of quasi-smoking and spend a bunch of money on new stuff and keep doing what I am doing even though I am really over it and ready to just drop it? Where do I sign up?” Seriously?

Reality. Check. (And mate.)

For what it is worth, if you asked me to do a back-of-the-envelope estimate for this, I would probably go with something like the following:

There were about 200,000 new vaping ex-smokers. It seems conservative to assume that about half of them quit smoking due to vaping. 100,000. Done.

That is obviously very rough, and the key step is just an educated guess. But an expert educated guess is often far better than fake precision based on obviously absurd numbers that just happen to have appeared in a journal (as a measure of something — in this case, not even the same thing). Here, it has far better face validity than West et al.’s tortured machinations.
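For contrast, the back-of-the-envelope approach above amounts to a one-line calculation, and its one educated-guess input can be varied explicitly. The 50% figure is the one used in the text; the other fractions here are hypothetical, included only to show how far the answer moves with that single assumption:

```python
# Back-of-the-envelope sensitivity check (my illustration, not from any paper):
# how the estimate of vaping-attributable quitters moves with the single key
# assumption, the fraction of new vaping ex-smokers who quit because of vaping.

new_vaping_exsmokers = 200_000  # rough figure used in the text

for attributed_fraction in (0.3, 0.5, 0.7):  # 0.5 is the text's estimate
    estimate = new_vaping_exsmokers * attributed_fraction
    print(f"{attributed_fraction:.0%} attributed -> {estimate:,.0f} quitters")
```

Even the low end of this range is roughly double West et al.’s 28,000, which is the point: no defensible value of the guess gets anywhere near their answer.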

[Update, 4 Oct:

Since this was posted, two other flaws in the West analysis have become apparent. The first comes from my Daily Vaper article which was based on the lessons from this, a terse presentation of the many ways in which vaping causes smoking cessation. That is worth reading in its own right if you are interested in this stuff. What occurred to me when writing that was that I was too charitable in just saying “ok fine” about the dropping of all ex-smokers who had become vapers after already quitting smoking. For some of them, taking up vaping caused them to not return to smoking. So a few of them should actually be counted. (One might make the semantic argument that the claim is about how many were caused to quit, not how many were caused to be (i.e., become or remain) ex-smokers, so they really do not count. But it is still worth mentioning.)

The second flaw came up in the comments, thanks to Geoff Vann. He figured out an internal inconsistency in the West approach. Basically, if their base methodology (assumptions, etc.) is applied to their step that removed the established vaping ex-smokers from that 560,000, it turns out that you cannot remove nearly as many as they do remove. You can see the details in the comment thread. Internal inconsistencies are always interesting because even if someone denies the criticisms from external knowledge and analysis — which are really far more damning — they cannot complain about being held to their own rules!


What is Tobacco Harm Reduction?

by Carl V Phillips

In response to a couple of recent requests and my schooling of FDA in a recent Twitter thread, it seems time for me to again write a primer on the meaning of tobacco harm reduction (THR). Rather than return to a previous version I have written, I am doing this from scratch. This seems best given the evolution of my thinking and changing circumstances.

The key phrase, of course, is “harm reduction”, with “tobacco” denoting the particular area it is applied to. This is important: THR is not a concept that stands apart from HR. It means “the principles of harm reduction, applied to the use of tobacco and nicotine products, and other products that tend to get lumped in with them” (see my previous post for an explanation of that last bit and some other useful background about the current politics). Indeed, when my university research and education group was trying to decide on a name and URL in 2005, it was far from obvious that this was the right term, and we considered others (e.g., “nicotine harm reduction”). While the first prominent use of “THR” appeared in 2001, it was far from established as a common term. (There is probably some endogeneity here, of course — if we had chosen a different term, that might have ascended instead.) In any case, the key to answering “what is THR” is asking “what is HR” rather than thinking it is something different.

The War on Nicotine begins

by Carl V Phillips

It has become a habit of many e-cigarette defenders to refer to recent chapters of the War on Tobacco as a war on nicotine, in part because they do not like their favored product being called a tobacco product. As for that motivation, yawn, whatever. But as for the statement, it was simply wrong.

The war on smoking in the USA morphed into a war on tobacco, which basically meant lumping in approximately-harmless smokeless tobacco with the not unreasonable original target of the war. This pretty much tracked the tobacco control industry’s professionalization (read: it went from being a noble — though obviously not universally embraced — hard-fought political cause to a venal business that had a license to print money and was constantly seeking new streams of revenue). Elsewhere in the world, the war was expanded to include Scandinavian smokeless tobacco as well as South Asian and other dip/chew products. Thus it was that for most of recent memory, the War on “Tobacco” was a ridiculously wealthy cabal of a few thousand people (with millions of useful idiots, of course) gunning for consumers and producers of smoked tobacco (tobacco, harmful), Western smokeless tobacco (tobacco, approximately harmless), and other oral products (not tobacco, often harmful).

When e-cigarettes finally became a major commercial product, after a remarkably long delay (which is, of course, a very interesting story, but not the present story), the Tobacco Warriors chose to add them to the list of targets. Thus the war became still more gerrymandered to include e-cigarettes. It was still a fairly well-defined single war, defined in much the same way that World War II was a war, despite really being two largely separate major wars and a few dozen border wars, tribal wars, and colonial struggles.  The war is and was defined in terms of what a particular faction did: it was the Anglophone major powers, plus whoever happened to fight one of the same enemies for whatever reason, versus everyone they fought.

As with WWII, the current enemies of tobacco control (which, interestingly, can also be defined largely in terms of government action by the Anglophone major powers) are increasingly not allies of one another. Perhaps that is a tactical error, but they (we) do have rather conflicting interests. But the Tobacco Warriors themselves — a fairly tightly-knit group of agencies, sock puppets, and funders, working together and maintaining remarkable party discipline — make it a war. They also draw boundaries around it: Despite this being fought like every other awful war on drugs, the people involved barely overlap with the traditional Drug War cabals (and, indeed, often actively oppose them despite looking just like them, but that is yet another story).

You can muse about whether there is a better name than “the War on Tobacco” for whatever this is. But one candidate name that was clearly wrong was “War on Nicotine”. For one thing, not all of the targets of the war even contained nicotine. But more important, nicotine in isolation was their thing. It was their peeps who praised, touted, and sold nicotine in its “proper” medicinal form (never mind that NRT is primarily used for the same purpose as the products that are the main targets of their war). One of their favorite go-to tropes was still that cigarette companies’ introduction of lower-nicotine products in the 1970s was some evil plot.

And then something very strange happened. Very strange. Over the last few months, the U.S. FDA suddenly embraced a long-discredited anti-nicotine policy proposal. They announced a policy goal of forcing cigarette manufacturers to lower the nicotine content of their products. (Well, legal cigarette manufacturers. The black market would inevitably replace the banned current products — one of the many reasons why this proposal is long-discredited.) Part of this has been an unending blast of government-sponsored anti-nicotine propaganda. The propaganda asserts — without any evidentiary or serious theoretical basis, needless to say — that forced nicotine reduction in cigarettes is the silver bullet that will “keep all new generations from becoming addicted blah blah blah”.

(Aside: I cannot overstate the strangeness and suddenness of this policy. Basically the only people who still supported this zombie idea were those who stood to profit from it. And then suddenly it was at the center of — indeed, is basically the entirety of — FDA’s tobacco policy. I strongly encourage someone who has the time and platform to make it worthwhile to investigate whether there is a money trail from the very small number of companies for whom this policy is an enormous windfall to the pockets of Price or Gottlieb — it is not like there is no history of corruption there. It is probably also worth checking Zeller and company, though they are not the variable here, so that seems like a long-shot. And the Trump campaign, of course, though given that the White House has not managed to put a government in place, it would have been quite a coup to push down such detailed policy from that far up the chain.)

Meanwhile there was the recent paper from Glantz’s shop, elegantly shredded by Chris Snowdon, in which the authors feebly attempt to tar NRT (their nicotine) as part of the evil machinations of the cigarette industry in the 1990s. I won’t even try to explain — there is nothing remotely defensible about it; read Snowdon if you want details. The importance is that Glantz’s current role is as a paid surrogate for FDA. This cannot be coincidence. FDA and tobacco control cannot comfortably fight a war on nicotine when the nicotine-iest products out there are their products that they have always embraced. So they need to muddy the waters around those products. What better way than to manufacture retroactive innuendo that NRT always was a brilliant cigarette industry plot that the hapless tobacco controllers fell for, and not the colossal screw-up on their own part that it was? That exact ploy has worked for them before.

FDA’s Center for Tobacco Products has always been a propaganda shop (they have certainly never been a real regulator). But previously their propaganda was lame pointless messages pitched at ignorant consumers (who do not even know CTP exists, let alone see their messaging), perhaps to provide memes for their useful idiots to publish (and, again, not actually be seen by anyone in their target audience). The current effort is different in terms of both volume and apparent purpose. You can see the volume by checking out the Twitter feeds of @FDATobacco and FDA Commissioner @SGottliebFDA, and also see the content there and by following the links.

This is not the usual background noise of silly anti-tobacco propaganda. This is a clear example of a fixture in the U.S. political system: a concerted push by a government faction to sell their policy. (The most recent high-profile example of this was from the faction trying to destroy the Affordable Care Act.) The target audience for this includes lazy reporters, who will just transcribe the propaganda and get a free byline, and influential pseudo-experts (aka, useful idiots) who do not know enough to not believe everything they read. The general public, the apparent target of CTP’s previous propaganda, is at most an afterthought as an audience. But the most important audience for these propaganda efforts is others in government, or those who have similar levels of policy-making influence: the aim is to persuade those on the fence and to bludgeon those who might oppose the policy.

For example, there was this from NCI (part of the National Institutes of Health, which along with FDA is part of HHS) that came out just after the Glantz propaganda dropped and as it was being touted by FDA and their surrogates.

The cabal at FDA will find it hard to run a full-on War on Nicotine if NCI actively opposes them. Similarly, there are presumably a lot of tobacco controllers further down in government, and in political organizations, who still embrace the old (correct) notion that nicotine — especially their nicotine — is not the problem. Most of them are just puppets, and will dutifully recite that we have always been at war with Eastasia …er, with nicotine, as soon as they get the message. Others can simply be silenced by the deluge from the agency that has more money than the rest of tobacco control combined. That is the playbook for this kind of inward-directed propaganda.

And so we have, for the first time, an actual War on Nicotine. Note that this does not mean the whole war can be relabeled The War on Nicotine for reasons noted above. This is just part of it. We are still stuck with “War on Tobacco (etc.)” for the larger effort unless someone can come up with something better.

Some commentators who focus only on e-cigarettes appear unaware of what is really happening. Gottlieb and FDA substantially delayed the implementation of the stealth ban on e-cigarettes and have made various noises about embracing e-cigarettes as a low-risk alternative to smoking. So, hey, everything looks good for e-cigarettes!! Some of those commentators have even bought into the FDA propaganda that FDA policies support harm reduction (an utterly Orwellian claim which I will address in my next post, or you can check out my Twitter thread). However, since e-cigarettes are basically a nicotine delivery device, how can there be both a war on nicotine and a more pro-ecig policy?

Indeed, how?

One possible explanation is that FDA is signaling a plan to shift toward the position of British tobacco controllers who have seized control of the vaping mindspace there, intending to use e-cigarettes as just another weapon against smoking and smokers. That playbook involves keeping just enough of a boot on vaping to keep it from being accepted as a normal personal choice (it is only a smoking cessation medicine!), and staying in a position to squash it when supporting it becomes no longer politically expedient.

It could be that. But I genuinely struggle to find that explanation in what we are seeing.

The two messages are simply too flatly contradictory. It is not exactly novel to see messaging from governments that includes policy proposals alongside stated support for goals that are antithetical to those policy proposals. Especially from this government and from this agency — after all, we heard basically the same happy talk about e-cigarettes even as FDA was marching toward a total ban as rapidly as they could. Obviously anyone other than lazy reporters and political actors who are looking for plausible deniability when they fall in with their faction’s bad policies should focus on the policy, not the contradictory happy talk.

But many do not. Thus this happy talk serves the rather obvious purpose of getting e-cigarette advocates — the most vocal and potentially politically effective opponents of a new War on Nicotine — to sit on their hands until the actual policy goal (whatever its crazy or corrupt motivation) has enough momentum. So we can expect no overt anti-ecig actions by FDA for a while. They still will not approve any new products (so there will be those temporarily grandfathered into minimal paperwork in 2016, and a high-paperwork maybe-denial grey-zone for later products, still leading to the full ban in 2022) or allow any merchant claims about the low risk. They are just pausing, not retreating. They might withdraw their proposed de facto ban of most smokeless tobacco, issued under the guise of being a health and safety regulation, though frankly that would probably only be because it will never survive judicial review (smokeless tobacco and harm reduction advocates are a much smaller voice than e-cigarette advocates).

But if they gain momentum for their War on Nicotine policy, things will probably go downhill quickly. Implementing a substantive policy (for the first time ever) will empower FDA to go ahead and fight the e-cigarette advocates they temporarily appeased. It seems impossible that a hugely impactful and crazy expensive policy of cigarette nicotine reduction, for the chiiiildren, will not spawn limits on e-cigarette nicotine density and “child friendly” flavors. With the delay of the full-on e-cigarette ban they no longer have the luxury of not even trying to actually regulate the products; FDA will be wanting to hurt vapers through other means.

If the proposed policy is quashed things are a bit harder to predict. Perhaps FDA will be shy to take on more fights. Perhaps there could be a real change of heart, but it would be the height of foolishness to read that into the same old rhetoric. Perhaps the political party that controls our government is really so deeply dedicated to consumer welfare and free choice, as some advocates seemed to think before the election, and they will clean house at CTP and change its direction (haha — kidding, of course — if that turns out to be the case, I vow to print this out and eat it).

But it seems most likely we would still see e-cigarette “regulation” that serves only as harassing partial bans as soon as they are no longer all-hands-on with their current policy. That is consistent with everything they have done so far. Moreover, it seems especially difficult for them to walk back on e-cigarettes after campaigning for a War on Nicotine for a year and convincing their useful idiots that we have always been at war with nicotine.