
Sunday Science Lesson: Debunking the claim that only 16,000 smokers switched to vaping (England, 2014)

by Carl V Phillips

When this journal letter (i.e., short paper), “Estimating the population impact of e-cigarettes on smoking cessation in England” by Robert West, Lion Shahab, and Jamie Brown came out last year, most of us said “wait, wot?” The authors estimated that in 2014, about 16,000 English smokers became ex-smokers because of e-cigarettes (a secondary analysis offered 22,000 as an alternative estimate). But that year saw an increase of about 160,000 ex-smokers who were vapers in the UK (the year-over-year increase for 2015 versus 2014) according to official statistics. In addition, there were about 170,000 more ex-smokers who identified as former vapers. Since the latter number subtracts from the number of ex-smokers who are vapers in 2015, they need to be added back. So the year-over-year increase in English ever-vapers among ex-smokers appears to be nearly 200,000, after roughly adjusting for the different populations (England is 80% of the UK population). Thus West et al. are claiming, in effect, that the vast majority of people who went from smoking to vaping did not quit smoking because of vaping.

My calculation is rough, and for several reasons it may be a bit high (e.g., the measured points in 2015 and 2014 demarcate a year that falls slightly later in calendar time than 2014 itself, and the rate of vaping initiation was increasing over time). But we are still talking about well over 100,000 new ex-smoker vapers. Probably closer to 200,000. So this would mean that about 90% of new ex-smoker vapers either would have quit smoking that year even without vaping, had quit tobacco entirely and only later took up vaping, or are not “real quitters” (i.e., they were destined to start smoking again before they would “count” as having quit; the cutoff is not well defined, but the authors seem to use one year). This seems rather implausible, to say the least.
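
To make the arithmetic concrete, here is a minimal sketch of that reality check in Python. This is a rough reconstruction of my numbers above, not anything from the paper; the 80% scaling is just the crude population adjustment.

```python
# Reality check: rough year-over-year increase in English ever-vapers
# among ex-smokers, using the rounded UK figures cited above.
new_vapers_uk = 160_000         # increase in ex-smokers currently vaping, 2015 vs 2014
new_former_vapers_uk = 170_000  # increase in ex-smokers who formerly vaped
england_share = 0.80            # England is roughly 80% of the UK population

new_ever_vapers_england = (new_vapers_uk + new_former_vapers_uk) * england_share
print(f"New ex-smoker ever-vapers in England: ~{new_ever_vapers_england:,.0f}")
# ~264,000 before the timing adjustments noted above; call it ~200,000

west_estimate = 16_000
print(f"Share West et al. credit to vaping: {west_estimate / 200_000:.0%}")  # ~8%
```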

This is an extraordinary claim on its face given what we know about the advantages of quitting by switching, and more so given that more detailed surveys of vapers (example) show almost all respondents believe they would still be smoking had they not found e-cigarettes. It must be noted that most respondents to those surveys are self-selected vaping enthusiasts who differ from the average new vaper, and that a few of them might be wrong and would have quit anyway. But the disconnect is still far too great for West’s weak analysis (really, assumptions) to come close to explaining.

I never bothered to comment on the paper at the time it came out because the methodology was so weak and the result so implausible that I did not think anyone would take it seriously. But the tobacco wars seldom meet a bit of junk science they do not like. In this case, Clive Bates asked me to examine the claim (and contributed some suggestions on this analysis and post) because some tobacco controllers have taken to saying “e-cigarettes caused only 16,000 people to quit smoking in England! so we should just prohibit people from using them!”

The proper responses to this absurd assessment and demand, in order of importance, are:

  1. It would not matter if they caused no one to quit smoking. It is a violation of the most fundamental human rights to use police powers to prohibit people from vaping if they want to. People have a right to decide what to do with their bodies. Moreover, in this particular case, you cannot even make the usual drug war claims that users of the product are driven out of their minds and do not understand the risks and the horrible path they will be drawn down: Vaping is approximately harmless, most people overestimate the risks, and it leads to no horrible path. It is outlandish — frankly, evil — to presume unto oneself the authority to deny people this choice.
  2. But even if you do not care about human rights and only care about health outcomes or whatever “public health” people claim to care about, causing a “mere” 16,000 English smokers to quit annually is quite the accomplishment. There is no plausible basis for claiming any recent tobacco control policy has done as much. Since there is no measurable downside, this is still a positive. Also, the rate of switching probably could be increased further with sensible policies and truthful communication of relative risks.
  3. The rough back-of-the-envelope approach used in the paper could never provide a precise point estimate even if the inputs were optimally chosen. But the inputs were not well chosen. The analysis included errors that led to a clear underestimate. When a back-of-the-envelope result contradicts a reality check, we should assume that reality got it right.

So I am taking up here what is really a tertiary point.

Back-of-the-envelope calculations

West et al. carried out a back-of-the-envelope calculation, a simple calculation based on convenient approximations that is intended to produce a quick rough estimate. It happens to have glaring errors, but I will come back to those. Crude back-of-the-envelope calculations have real value in policy analysis. I taught students this for years. In my experience, when there is a “debate” about the comparative costs and benefits of a policy proposal, at least half the time a quick simple calculation shows that one is greater than the other by an order of magnitude. The simple estimate can illustrate that the debate is purely a result of hidden agendas or profound ignorance, and also eliminate the waste of unnecessary efforts to make precise calculations.

When doing such an analysis, it is ideal if you get the same result even if you make every possible error as “conservative” as is plausible (i.e., in the direction that favors the losing side of the comparison). West’s analysis would thus be useful if it were presented as follows: “Some people suggest that the health cost from vaping experienced by new vapers outweighs the reduction in the health cost from smoking cessation that vaping causes. Even if we assume that vaping is 3% as harmful as smoking, the total health risk of additional vapers (the annual increase) would be on the order of the equivalent of the risk for about 5000 smokers. Our extremely conservative calculation yields on the order of 20,000 smokers quitting as a result of vaping. So even with extreme assumptions, the net health effect is clearly positive.”
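
For concreteness, here is a sketch of what that defensible version might look like. The annual increase in vapers used here is an illustrative placeholder, not a figure from the paper.

```python
# Hypothetical conservative comparison (illustrative numbers, not West et al.'s):
annual_new_vapers = 170_000  # placeholder for the annual increase in vapers
vaping_relative_harm = 0.03  # "even if vaping is 3% as harmful as smoking"

smoker_equivalents = annual_new_vapers * vaping_relative_harm
print(f"Health cost of new vapers: ~{smoker_equivalents:,.0f} smoker-equivalents")
# ~5,000 smoker-equivalents, versus ~20,000 smokers quitting because of vaping:
# even under extreme assumptions, the net health effect is clearly positive.
```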

But the authors did not claim to be offering an extremely conservative underestimate for purposes of doing such a calculation. They implicitly claimed to be providing a viable point estimate. And that requires a more robust analysis rather than rough-cuts, and best point estimates rather than worst-case scenarios. It also requires a reality check about what would have to be true if the ultimate estimate were true, namely that almost everyone who switched from smoking to vaping did not stop smoking because of vaping.

West’s estimation based on self-identified quit attempts

The crux of their calculation is the following: Their surveys estimate that 900,000 smokers self-identify as having attempted to quit smoking using e-cigarettes (please read this and similar statistics with an implicit “in this population, during this period” and I will stop interjecting it). They then assume that 2.5% of them actually did quit smoking because of e-cigarettes.
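
Their headline figure then falls out of a single multiplication. This is a minimal reconstruction; the published numbers reflect their rounding and the final subtraction discussed below.

```python
quit_attempts_with_ecigs = 900_000  # self-identified e-cigarette quit attempts
assumed_extra_success_rate = 0.025  # the 2.5% discussed next

print(f"~{quit_attempts_with_ecigs * assumed_extra_success_rate:,.0f}")
# ~22,500, which rounds to their 22,000; the final subtraction (discussed
# below) takes it down to the headline 16,000
```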

Where does the 2.5% come from? It is cited to, and seems to be based mainly on, the results of the clinical trials where some smokers were assigned to try a particular regimen of e-cigarettes; the 2.5% is an estimate of the rate at which they quit smoking above the rate for those assigned to a different protocol.

Before addressing the problems with using trial results, consider the second paper they cite as a basis for the 2.5% figure, one by their own research group. How they got from that paper’s results to 2.5% is unfathomable. That paper was a retrospective study of people who had tried to quit smoking using various methods and found that those reporting using e-cigarettes were successful about 20% of the time, which beat out the two alternatives (unaided and NRT) by 5 and 10 percentage points. If they had used ~20% instead of ~2.5% their final result would have been up in the range that would have passed the reality check. So what were they thinking?

I cannot be certain, but am pretty sure. It appears they only looked at differences in cessation rates and not the absolute rates, so the 5 or 10 percentage points rather than the full 20. Several things they wrote make it clear this is how they were thinking. This is one of several fatal flaws in their analysis. There are two main pathways via which e-cigarettes can cause someone to quit smoking (which means it would not have happened without them): E-cigarette use can cause a quit attempt to be successful when that same quit attempt would not have otherwise been successful, or it can cause a quit attempt (ultimately successful) that would not have otherwise happened. West et al. are pretty clearly assuming that the second of these never happens. I am guessing that the authors did not even understand they were making a huge — and clearly incorrect — assumption here.

Causing quit attempts is a large portion of cases where e-cigarettes caused smoking cessation. Indeed in my CASAA survey of vapers (not representative of all vapers, but a starting point), 11% of the respondents were “accidental quitters”, smokers who were not even actively pursuing smoking cessation, but who tried e-cigarettes and were so enamoured that they switched anyway. Add to these the smokers who had vague intentions of quitting but only made a concerted effort thanks to e-cigarettes, and probably about half of all quit attempts using e-cigarettes do not replace a quit attempt using another method. So if half the 900,000 made the quit attempt because of e-cigarettes and 20% succeeded, we have, right there, a number that is consistent with the reality check I proposed.
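
Putting numbers on the caused-attempt pathway alone, under the assumptions just stated (the 50% share is my rough guess from above, not a measured quantity):

```python
attempts = 900_000
caused_attempts = attempts * 0.5  # ~half would not have been made without e-cigarettes
success_rate = 0.20               # the ~20% from their own retrospective study

print(f"Quitters from caused attempts alone: ~{caused_attempts * success_rate:,.0f}")
# ~90,000 — and that is before counting the attempts that would have happened
# anyway but succeeded only because of e-cigarettes. Using the full 20% on all
# 900,000 gives ~180,000, squarely in reality-check territory.
```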

Of course they did not use that 20%, and it does seem too high. What they did was assume that 5% would have succeeded in an unaided quit attempt without e-cigarettes — and all the same people would have made that attempt — and so 7.5% (5%+2.5%) actually succeeded when using e-cigarettes. But if half never would have made that attempt then a full 7.5% of them should be counted as being caused to quit by e-cigarettes, which more than doubles the final result (“more than” because their final subtraction, below, would not double but should actually be reduced).
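
Here is the corrected accounting under their own figures, again assuming (as above) that half of the attempts were caused by e-cigarettes:

```python
attempts = 900_000
success_with_ecigs = 0.075  # their 5% unaided baseline + 2.5% increment

caused = attempts * 0.5    # attempts that would not have happened: full credit
replaced = attempts * 0.5  # attempts that merely switched method: 2.5% credit

attributable = caused * success_with_ecigs + replaced * 0.025
print(f"Quits attributable to e-cigarettes: ~{attributable:,.0f}")
# ~45,000, double the ~22,500 above — and the final result more than doubles,
# because the subtraction at the end should also shrink
```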

As for why they did not use that 20%, I suspect (though they do not say) that when looking at the numbers from that paper, West et al. focused not only on the differences (the error I just discussed) but also on the “adjusted” rates of how much more effective e-cigarettes were than the other methods, which were considerably lower than the numbers I quoted from the paper above. This too is an error. Public health researchers think of “adjusting” (attempting to control for confounding) as something you just do, a magical ritual that always makes your result better. This perception is false for many reasons, but a particularly glaring one applies in this case: The adjusted number is basically a measure of how helpful e-cigarettes would have been, on average, if those who tried to switch to them had the same demographics as smokers using other cessation methods. But smokers who try to switch to e-cigarettes have demographics that predict they are more likely to succeed in switching than the average smoker. Of course they do! People know themselves (a fact that seems to elude public health researchers). The ones who tried switching were who they were; they were not a random cross-section of smokers. So it seems that West et al. effectively said “pretend that instead of self-selecting for greater average success, those who tried to switch were chosen at random, and instead of using the success rate for the people who actually made that choice, we will use the number that would have been true if they were random.”

[Caveat: The attempt to control for confounding could also correct for the switchers having characteristics that make them more likely to succeed in quitting no matter what method they tried. So some of the “adjustment” is valid — but only for those who would have tried anyway — but much of it is not.]

Clinical trials

That last point relates closely to the other “evidence” that was cited as a basis for that 2.5% figure, and appears to have dominated it: the clinical trials.

Clinical trials of smoking cessation are useless for measuring real-world effects of particular strategies when they are chosen by free-living people. At best they measure the effects of clinical interventions. But in this case, these rigid protocols are not even a good measure of the effect of real-world clinical interventions in which smoking cessation counselors try to most effectively promote e-cigarettes by meeting people where they are and making adjustments for each individual. I have previously discussed this extensively.

A common criticism is that the trials directed subjects toward relatively low-quality e-cigarettes. That is one problem. More important, the trials did not mimic the social support that would come from, say, a friend who quit smoking using e-cigarettes and is offering advice and guidance. The inflexibility of trials does not resemble the real-world process of trying, learning, improving, asking, and optimizing that such decisions entail. Clinical trials are designed to measure biological effects (and even then they have problems), not complex consumer choices.

But it is actually even worse than that. A common failing in epidemiology is not having a clue about what survey respondents really mean when they answer questions. There is no validation step in surveys where pilot subjects are given an open-ended debriefing of how they interpreted a question and what they really meant by their answer. (I always do that with my surveys, but I am rather unusual.) So consider what a negative response to “tried to quit smoking with e-cigarettes” really means. If a friend shoved an e-cigarette into a smoker’s hand and said “you should try this”, but she refused to even try it, she would undoubtedly not say she tried to quit smoking with e-cigarettes. But in a clinical trial, if that were her assignment, she would be counted among those who used e-cigarettes to try quitting, thus pulling down the success rate.

If she tried the e-cigarette that was thrust at her, but did not find it promising, chances are that in a survey she would not say she tried quitting using e-cigarettes. (She might, but given the lack of any reporting about piloting and validation of these survey instruments, we can only guess how likely that is.) If she passed that first hurdle, of not rejecting e-cigarettes straightaway, but used them sometimes for a few days or weeks, she might or might not say she tried quitting using e-cigarettes. But if she actually quit smoking using e-cigarettes, she would undoubtedly count herself among those who tried to quit using e-cigarettes. I trust you see the problem.

It is the same problem that is common in epidemiology when you read, say, that 20% of the people who got a particular infection died from it. This usually means that 20% of the people who got sick enough from it to present for medical care and get diagnosed died, but countless others had mild or even asymptomatic infections. Everyone in the numerator (died in this case, quit in the case of e-cigarettes) is counted but an unknown and probably very large portion of those in the denominator (got the infection, were encouraged to try an e-cigarette) are not. Clinical trial results are (at best) analogous to the percentage you would get if you did antibody tests in the population to really identify who got the infection. That is the right way to measure the percentage of infected who die. But if you then applied that percentage to the portion who presented for medical treatment, you would be underestimating the number of them who would die. That is basically what West et al. did. Their 900,000 are those for whom e-cigarettes seemed promising enough to be worth seriously trying as an alternative, but they applied a rate of success that was (again, at best) a measure of the effect on everyone, including those who did not consider them promising enough to try.
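
A toy numeric version of that denominator mismatch (all numbers invented for illustration):

```python
# Denominator problem, in miniature (invented numbers):
infected = 10_000  # true denominator, found only by (say) antibody testing
presented = 2_000  # the self-selected, sicker subset that gets diagnosed
deaths = 400

infection_fatality = deaths / infected  # population-wide rate
case_fatality = deaths / presented      # rate among the self-selected subset

print(f"Infection fatality rate: {infection_fatality:.0%}")  # 4%
print(f"Case fatality rate:      {case_fatality:.0%}")       # 20%

# Applying the population-wide rate to the self-selected subset understates deaths:
print(f"Predicted: {infection_fatality * presented:.0f} vs actual: {deaths}")
# 80 vs 400 — the same shape of error as applying a trial-based success rate
# to the 900,000 who self-selected into seriously trying e-cigarettes
```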

This would be a fatal flaw in West’s approach even if the trials represented optimal e-cigarette interventions, providing many options among optimal products and the hand-holding that would be offered by a knowledgeable friend, vape shop, or genuine smoking cessation counseling effort. They did not, and so they underestimated even what they might have been able to measure.

Final step

As a final step, West et al.’s approach debits e-cigarettes with an estimated decrease in the use of other smoking cessation methods caused by those who tried e-cigarettes instead. These are the methods that are believed to further increase the cessation rate above the unaided quitting that West debited across the board (the major error discussed above). We can set aside deeper points about whether estimates of the effects of these methods, created almost entirely by people whose careers are devoted to encouraging these methods, are worth anything. West et al. assume that those methods would have had average effectiveness had they been tried by those who instead chose vaping. They also still assume that every switching attempt would have been replaced by another quit attempt in the absence of e-cigarettes, as discussed above. This lowers their estimate from 22,000 to 16,000. But a large portion of smokers who quit using e-cigarettes do so after trying many or all of those other methods, often repeatedly. Assuming those methods would have often miraculously been successful if tried one more time makes little sense.
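
In rough numbers (the size of the debit is simply implied by their two published figures):

```python
pre_debit = 22_000  # their estimate before the debit
headline = 16_000   # their headline estimate

print(f"Implied debit for displaced cessation methods: ~{pre_debit - headline:,}")
# ~6,000 quits they assume the other methods would have produced instead
```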

As a related point that further illustrates the problems with their previous steps, recall that the 2.5% is their smoking cessation rate in excess of that of those who tried unaided quitting or some equivalently effective protocol. But it seems very likely that the average smoker who tries to switch to e-cigarettes has already had worse success with that other protocol than has the average volunteer for a cessation trial. This is the “I tried everything else, but then I discovered vaping” story. I am aware of no good estimate for this disparity, but if the average smoker who tried to switch were merely 1 percentage point less likely than average to succeed with the other protocol (e.g., because she already knew that it did not work for her), then the multiplier should have been 3.5% (7.5%-4% rather than 7.5%-5%). This is trivial compared to the error of using the incredibly low estimated success rate suggested by the trials in the first place, of course, but that little difference alone would have increased West’s estimate by 40%. This illustrates just how unstable and dependent on hidden assumptions that estimate is, even apart from the major errors.
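
The sensitivity is easy to see; the 4% counterfactual here is the hypothetical one-point difference just described, not an estimate from any source:

```python
attempts = 900_000
success_with_ecigs = 0.075

their_increment = success_with_ecigs - 0.05     # assume average 5% unaided success
adjusted_increment = success_with_ecigs - 0.04  # hypothetical: 1 point worse unaided

print(f"Their multiplier:    ~{attempts * their_increment:,.0f}")     # ~22,500
print(f"Adjusted multiplier: ~{attempts * adjusted_increment:,.0f}")  # ~31,500
print(f"Change: {adjusted_increment / their_increment - 1:+.0%}")     # +40%
```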

Returning to the reality check

But lest we get lost in the details, the crux is still that West implicitly concluded that the vast majority of those who switched from smoking to vaping did not quit smoking because of vaping. The authors never reflect on how that could possibly be the case. They do, however, offer an alternative analysis, in what are effectively the footnotes, that gives the illusion of responding to this problem without actually doing so. They write:

The figure of approximately 16,000–22,000 is much lower than the population estimates of e-cigarette users who have stopped smoking (approximately 560,000 in England at the last count, according to the Smoking Toolkit Study). However, the reason for this can be understood from the following….

What follows is even weirder than their main analysis.

West’s “alternative” analysis

They actually start with that 560,000. That is inexplicable since it is possible to estimate the year-over-year change in 2014, as I did, rather than working with the cumulative figure. The 560,000 turns out to be well under half what you get if you add the current vapers and ex-vapers among ex-smokers from the statistics I cite above. So their number already incorporates some unexplained discounting from what appears to be the cumulative number. But since I am baffled by this disconnect, I will just leave this sitting here and proceed to look at what they did with that number.

As far as I can understand from their rather confusing description of their methods here, their first step is to eliminate those who were already vaping by 2014, and thus did not switch in 2014. That makes sense, though it would have been easier to just start with that. When they do this, they leave themselves with 308,000. So they started with something much lower than what you get from the statistics I looked at, and ended up with something that is half-again higher than the rough estimate from those statistics. Um, ok — just going to leave that here too. But the higher starting figure makes it even more difficult for them to explain away the reality check.

Their next step is the only one that seems valid. They estimate that 9% of ex-smokers who became vapers did so sometime after they had already completely quit smoking, and subtract them. This is plausible. An ex-smoker who is dedicated to never smoking again still might see the appeal of consuming nicotine in a low-risk and smoking-like manner again. (Note that this should be counted as yet another benefit of e-cigarettes, giving those individuals a choice that makes them better off, even though the “public health” types would count it as a cost because they are not being proper suffering abstinents. It might even stop them from returning to smoking.)

Of course, this only makes a small dent. So where does everyone else go? Most of them go here:

It has to be assumed on the basis of the evidence [6, 7] that only a third of e-cigarette users who stopped smoking would not have succeeded had they used no cessation aid

…and here:

It is assumed that, as with other smoking cessation aids, 70% of those recent ex-smokers who use e-cigarettes will relapse to smoking in the long term [11]

This takes them down to 28,000.
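
Chaining their stated assumptions reproduces that figure, which makes it easy to see how much work the two big assumptions are doing:

```python
remaining = 308_000    # the 560,000 minus those already vaping before 2014
remaining *= 1 - 0.09  # drop the 9% who quit smoking before taking up vaping
remaining *= 1 / 3     # "only a third would not have succeeded unaided"
remaining *= 1 - 0.70  # "70% ... will relapse to smoking in the long term"

print(f"~{remaining:,.0f}")  # ~28,000 (before the further ~6,000 they knock off)
```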

Taking the latter 70% first, any limitations in relying on a single source for this estimate (another West paper) are overshadowed by: (a) There is no reason to assume switching to vaping will work as poorly, by this measure, as the over-promising and under-delivering “approved” aids that fail because they do not actually change people’s preferences as promised. Indeed, there is overwhelming evidence to the contrary. (b) Many of those in the population defined by “started vaping that year and were an ex-smoker as of the end of the year” have already experienced a lot of the “long term”. That is, if we simplify to the year being exactly calendar 2014, some people joined that population in December, and thus a (correct, undoubtedly much lower than 70%) estimate of the discounting between “smoking abstinent for a week or two thanks to e-cigarettes” and “abstinent at a year” (a typical measure for “really quitting” as noted above) is appropriate. But some joined the population in January and are already nearly at the long term. On average, they will have been ex-smokers for about six months, and being abstinent at six months is a much better predictor of the long run than the statistic they used (which, again, is wrong to apply to vaping). Combining (a) and (b) makes it clear that this is a terrible estimate.

As for the first of those major reductions, references 6 and 7 do not actually provide any reason that “only a third…has to be assumed”. Those are the same references they cite for the 2.5% above. So this is just a reprise of the 2.5% claim, and suffers from the same errors I cited above.

You see what they did there, right? The reality check I offered is “your results imply that 90% of new ex-smoker vapers did not quit because of vaping; can you explain that?” Either anticipating this damning criticism or by accident, they provided their answer: “Yes, we assume — based on nothing that remotely supports the assumption — that 70% of them would have quit anyway (and 9% were already ex-smokers, and some other bits).”

This step basically sneaks in the same fatal assumptions from their original calculation but is presented as if it offers an independent triangulation that responds to the criticism that their original calculation has implausible implications. Here is a pretty good analogy: Someone measures a length with a ruler that is calibrated wrong by a factor of ten. They are confronted with the fact that a quick glance shows that their result is obviously wrong. So they make a copy of their ruler and “validate” their results with an “alternative” estimation method.

Oh, and at the end of this they knock off another 6000 based on what appears to be double counting, but at this point who really cares?

Conclusions

Their first version of the estimate is driven mainly by their assumption that attempting to switch to vaping is close to useless for helping someone quit smoking compared to unaided quitting, and also that all those who attempted to switch would have tried unaided quitting in the absence of e-cigarettes. There are also other errors. Their second version is based on the “reasoning” that because we have assumed that attempting to switch to vaping is close to useless, it must be that most of those whom we observed actually switching to vaping did not really quit smoking because of vaping — and so (surprise!) approximately the same low estimate.

So nowhere do they actually ever address the reality check question:

Seriously? You are claiming that almost everyone who ventured into one of those weird vape shops, who spent hundreds of pounds on e-cigarettes, who endured the learning curve for vaping, who ignored the social pressure to just quit entirely, and who decided to keep putting up with the limitations and scorn they faced as a smoker and would still face as a vaper, that almost all of them were someone who was going to just quit anyway? You are really claiming that almost all of them said, “You know, I think I will just quit buying fags this week — oh, wait, you mean I instead could go to the trouble to learn a new way of quasi-smoking and spend a bunch of money on new stuff and keep doing what I am doing even though I am really over it and ready to just drop it? Where do I sign up?” Seriously?

Reality. Check. (And mate.)

For what it is worth, if you asked me to do a back-of-the-envelope estimate for this, I would probably go with something like the following:

There were about 200,000 new vaping ex-smokers. It seems conservative to assume that about half of them quit smoking due to vaping. 100,000. Done.

That is obviously very rough, and the key step is just an educated guess. But an expert educated guess is often far better than fake precision based on obviously absurd numbers that just happen to have appeared in a journal (as a measure of something — in this case, not even the same thing). In this case, it has far better face validity than West et al.’s tortured machinations.

[Update, 4 Oct:

Since this was posted, two other flaws in the West analysis have become apparent. The first comes from my Daily Vaper article which was based on the lessons from this, a terse presentation of the many ways in which vaping causes smoking cessation. That is worth reading in its own right if you are interested in this stuff. What occurred to me when writing that was that I was too charitable in just saying “ok fine” about the dropping of all ex-smokers who had become vapers after already quitting smoking. For some of them, taking up vaping caused them to not return to smoking. So a few of them should actually be counted. (One might make the semantic argument that the claim is about how many were caused to quit, not how many were caused to be (i.e., become or remain) ex-smokers, so they really do not count. But it is still worth mentioning.)

The second flaw came up in the comments, thanks to Geoff Vann. He figured out an internal inconsistency in the West approach. Basically, if their base methodology (assumptions, etc.) is applied to their step that removed the established vaping ex-smokers from that 560,000, it turns out that you cannot remove nearly as many as they do remove. You can see the details in the comment thread. Internal inconsistencies are always interesting because even if someone denies the criticisms from external knowledge and analysis — which are really far more damning — they cannot complain about being held to their own rules!

]
