by Carl V Phillips
When this journal letter (i.e., short paper), “Estimating the population impact of e-cigarettes on smoking cessation in England” by Robert West, Lion Shahab, and Jamie Brown, came out last year, most of us said “wait, wot?” The authors estimated that in 2014, about 16,000 English smokers became ex-smokers because of e-cigarettes (a secondary analysis offered 22,000 as an alternative estimate). But that year saw an increase of about 160,000 ex-smokers who were vapers in the UK (the year-over-year increase for 2015 versus 2014) according to official statistics. In addition, there were about 170,000 more ex-smokers who identified as former vapers. Since the latter number subtracts from the number of ex-smokers who are vapers in 2015, it needs to be added back. So the year-over-year increase in English ever-vapers among ex-smokers appears to be nearly 200,000, after roughly adjusting for the different populations (England is 80% of the UK population). Thus West et al. are claiming, in effect, that the vast majority of people who went from smoking to vaping did not quit smoking because of vaping.
My calculation is rough, and for several reasons it may be a bit high (e.g., the measured points in 2015 and 2014 demarcate a year that falls slightly later in calendar time than 2014 itself, and the rate of vaping initiation was increasing over time). But we are still talking about well over 100,000 new ex-smoker vapers. Probably closer to 200,000. So this would mean that about 90% of new ex-smoker vapers either would have quit smoking that year even without vaping, had quit tobacco entirely and only later took up vaping, or are not “real quitters” (i.e., they were destined to start smoking again before they would “count” as having quit; the cutoff is not well defined, but the authors seem to use one year). This seems rather implausible, to say the least.
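A minimal sketch of the arithmetic in the two paragraphs above, using the approximate figures quoted there. The unadjusted result comes out somewhat above the “probably closer to 200,000” figure because the downward adjustments just described are not quantified here:

```python
# Reality-check arithmetic using the approximate UK statistics quoted above.
uk_rise_current_vaper_ex_smokers = 160_000  # year-over-year increase, 2015 vs 2014
uk_rise_former_vaper_ex_smokers = 170_000   # additional ex-smokers identifying as former vapers
england_share_of_uk = 0.8                   # England is about 80% of the UK population

# Ex-smokers who stopped vaping no longer appear in the current-vaper count,
# so they are added back to get the rise in ever-vapers among ex-smokers.
uk_rise_ever_vapers = (uk_rise_current_vaper_ex_smokers
                       + uk_rise_former_vaper_ex_smokers)

# Rough adjustment from the UK to England.
england_rise_ever_vapers = uk_rise_ever_vapers * england_share_of_uk
print(f"{england_rise_ever_vapers:,.0f}")  # 264,000 before the further discounting
```

Applying the (unquantified) reasons the figure runs a bit high brings this down toward the 200,000 used in the rest of the post.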
This is an extraordinary claim on its face given what we know about the advantages of quitting by switching, and more so given that more detailed surveys of vapers (example) show almost all respondents believe they would still be smoking had they not found e-cigarettes. It must be noted that most respondents to those surveys are self-selected vaping enthusiasts who differ from the average new vaper, and that a few of them might be wrong and would have quit anyway. But the disconnect is still far too great for West’s weak analysis (really, assumptions) to come close to explaining.
I never bothered to comment on the paper at the time it came out [beyond a minor mention here] because the methodology was so weak and the result so implausible that I did not think anyone would take it seriously. But the tobacco wars seldom meet a bit of junk science they do not like. In this case, Clive Bates asked me to examine the claim (and contributed some suggestions on this analysis and post) because some tobacco controllers have taken to saying “e-cigarettes caused only 16,000 people to quit smoking in England! so we should just prohibit people from using them!”
The proper responses to this absurd assessment and demand, in order of importance, are:
- It would not matter if they caused no one to quit smoking. It is a violation of the most fundamental human rights to use police powers to prohibit people from vaping if they want to. People have a right to decide what to do with their bodies. Moreover, in this particular case, you cannot even make the usual drug war claims that users of the product are driven out of their minds and do not understand the risks and the horrible path they will be drawn down: Vaping is approximately harmless, most people overestimate the risks, and it leads to no horrible path. It is outlandish — frankly, evil — to presume unto oneself the authority to deny people this choice.
- But even if you do not care about human rights and only care about health outcomes or whatever “public health” people claim to care about, causing a “mere” 16,000 English smokers to quit, annually, is quite the accomplishment. There is no plausible basis for claiming any recent tobacco control policy has done as much. Since there is no measurable downside, this is still a positive. Also, the rate of switching probably could be increased further with sensible policies and truthful communication of relative risks.
- The rough back-of-the-envelope approach used in the paper could never provide a precise point estimate even if the inputs were optimally chosen. But the inputs were not well chosen. The analysis included errors that led to a clear underestimate. When a back-of-the-envelope result contradicts a reality check, we should assume that reality got it right.
So I am taking up here what is really a tertiary point.
Back-of-the-envelope calculations
West et al. carried out a back-of-the-envelope calculation, a simple calculation based on convenient approximations that is intended to produce a quick rough estimate. It happens to have glaring errors, but I will come back to those. Crude back-of-the-envelope calculations have real value in policy analysis. I taught students this for years. In my experience, when there is a “debate” about the comparative costs and benefits of a policy proposal, at least half the time a quick simple calculation shows that one is greater than the other by an order of magnitude. The simple estimate can illustrate that the debate is purely a result of hidden agendas or profound ignorance, and also eliminate the waste of unnecessary efforts to make precise calculations.
When doing such an analysis, it is ideal if you get the same result even if you make every possible error as “conservative” as is plausible (i.e., in the direction that favors the losing side of the comparison). West’s analysis would thus be useful if it were presented as follows: “Some people suggest that the health cost from vaping experienced by new vapers outweighs the reduction in the health cost from smoking cessation that vaping causes. Even if we assume that vaping is 3% as harmful as smoking, the total health risk of additional vapers (the annual increase) would be on the order of the equivalent of the risk for about 5000 smokers. Our extremely conservative calculation yields on the order of 20,000 smokers quitting as a result of vaping. So even with extreme assumptions, the net health effect is clearly positive.”
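The hypothetical conservative framing in that quote can be sketched as follows. The 170,000 annual increase in vapers is an assumed round figure chosen to reproduce the “about 5000 smokers” risk equivalence; the other inputs come from the quoted text:

```python
# Hedged worst-case comparison: health cost of new vapers vs. benefit of quitters.
relative_harm_of_vaping = 0.03  # assume vaping is 3% as harmful as smoking
new_vapers_per_year = 170_000   # assumed annual increase in vapers (illustrative)
quitters_caused = 20_000        # West et al.'s own conservative cessation figure

# Total added risk, expressed as an equivalent number of smokers (~5,000).
smoker_equivalents_of_risk = new_vapers_per_year * relative_harm_of_vaping

# Even under these extreme assumptions, the comparison is not close.
assert quitters_caused > smoker_equivalents_of_risk
```

The point of such a sketch is that the conclusion survives even when every input is pushed as far as plausible against it.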
But the authors did not claim to be offering an extremely conservative underestimate for purposes of doing such a calculation. They implicitly claimed to be providing a viable point estimate. And that requires a more robust analysis rather than rough-cuts, and best point estimates rather than worst-case scenarios. It also requires a reality check about what would have to be true if the ultimate estimate were true, namely that almost everyone who switched from smoking to vaping did not stop smoking because of vaping.
West’s estimation based on self-identified quit attempts
The crux of their calculation is the following: Their surveys estimate that 900,000 smokers self-identify as having attempted to quit smoking using e-cigarettes (please read this and similar statistics with an implicit “in this population, during this period” and I will stop interjecting it). They then assume that 2.5% of them actually did quit smoking because of e-cigarettes.
Where does the 2.5% come from? It is cited to, and seems to be based mainly on, the results of the clinical trials where some smokers were assigned to try a particular regimen of e-cigarettes; the 2.5% is an estimate of the rate at which they quit smoking above the rate for those assigned to a different protocol.
Before addressing the problems with using trial results, consider the second paper they cite as a basis for the 2.5% figure, which is one by their own research group. How they got from that paper’s results to 2.5% is unfathomable. That paper was a retrospective study of people who had tried to quit smoking using various methods; it found that those reporting using e-cigarettes were successful about 20% of the time, which beat out the two alternatives (unaided and NRT) by 5 and 10 percentage points. If they had used ~20% instead of ~2.5%, their final result would have been up in the range that would have passed the reality check. So what were they thinking?
I cannot be certain, but am pretty sure. It appears they only looked at differences in cessation rates and not the absolute rates, so the 5 or 10 percentage points rather than the full 20. Several things they wrote make it clear this is how they were thinking. This is one of several fatal flaws in their analysis. There are two main pathways via which e-cigarettes can cause someone to quit smoking (which means it would not have happened without them): E-cigarette use can cause a quit attempt to be successful when that same quit attempt would not have otherwise been successful, or it can cause a quit attempt (ultimately successful) that would not have otherwise happened. West et al. are pretty clearly assuming that the second of these never happens. I am guessing that the authors did not even understand they were making a huge — and clearly incorrect — assumption here.
Causing quit attempts is a large portion of cases where e-cigarettes caused smoking cessation. Indeed, in my CASAA survey of vapers (not representative of all vapers, but a starting point), 11% of the respondents were “accidental quitters”, smokers who were not even actively pursuing smoking cessation, but who tried e-cigarettes and were so enamoured that they switched anyway. Add to these the smokers who had vague intentions of quitting but only made a concerted effort thanks to e-cigarettes, and probably about half of all quit attempts using e-cigarettes do not replace a quit attempt using another method. So if half the 900,000 made the quit attempt because of e-cigarettes and 20% succeeded, we have, right there, a number that is consistent with the reality check I proposed.
Of course they did not use that 20%, and it does seem too high. What they did was assume that 5% would have succeeded in an unaided quit attempt without e-cigarettes — and all the same people would have made that attempt — and so 7.5% (5%+2.5%) actually succeeded when using e-cigarettes. But if half never would have made that attempt then a full 7.5% of them should be counted as being caused to quit by e-cigarettes, which more than doubles the final result (“more than” because their final subtraction, below, would not double but should actually be reduced).
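A sketch of the accounting in this paragraph; the 50% share of quit attempts caused by e-cigarettes is the rough figure argued for above, not a number from the paper:

```python
attempters = 900_000   # smokers who report trying to quit with e-cigarettes
unaided_rate = 0.05    # West et al.'s assumed success rate without e-cigarettes
excess_rate = 0.025    # their assumed excess success rate from e-cigarettes
ecig_rate = unaided_rate + excess_rate  # 7.5% succeed when using e-cigarettes

# West et al.'s implicit calculation: every e-cigarette attempt merely
# replaces an unaided attempt, so only the excess rate counts as caused.
west_estimate = attempters * excess_rate  # ~22,500

# Alternative: suppose half the attempts would not have happened at all
# without e-cigarettes; the full 7.5% of those count as caused quits.
caused_share = 0.5
alt_estimate = (attempters * caused_share * ecig_rate
                + attempters * (1 - caused_share) * excess_rate)  # ~45,000
```

Under this one changed assumption the estimate roughly doubles, before even questioning the 2.5% figure itself.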
As for why they did not use that 20%, I suspect (though they do not say) that when looking at the numbers from that paper, West et al. focused not only on the differences (the error I just discussed) but on the “adjusted” rates of how much more effective e-cigarettes were than the other methods, which were considerably lower than the numbers I quoted from the paper above. This too is an error. Public health researchers think of “adjusting” (attempting to control for confounding) as something you just do, a magical ritual that always makes your result better. This perception is false for many reasons, but a particularly glaring one in this case: The adjusted number is basically the measure of how helpful e-cigarettes would have been, on average, if those who tried to switch to them had the same demographics as smokers using other cessation methods. Smokers who try to switch to e-cigarettes have demographics that predict they are more likely to succeed in switching than the average smoker. Of course they do! People know themselves (a fact that seems to elude public health researchers). The ones who tried switching were who they were; they were not a random cross-section of smokers. So it seems that West et al. effectively said “pretend that instead of self-selecting for greater average success, those who tried to switch were chosen at random, and instead of using the success rate for the people who actually made that choice, we will use instead the number that would have been true if they were random.”
[Caveat: The attempt to control for confounding could also correct for the switchers having characteristics that make them more likely to succeed in quitting no matter what method they tried. So some of the “adjustment” is valid — only for those who would have tried anyway — but much of it is not.]
That last point relates closely to the other “evidence” that was cited as a basis for that 2.5% figure, and appears to have dominated it: the clinical trials.
Clinical trials of smoking cessation are useless for measuring real-world effects of particular strategies when they are chosen by free-living people. At best they measure the effects of clinical interventions. But in this case, these rigid protocols are not even a good measure of the effect of real-world clinical interventions in which smoking cessation counselors try to most effectively promote e-cigarettes by meeting people where they are and making adjustments for each individual. I have previously discussed this extensively.
A common criticism is that the trials directed subjects toward relatively low-quality e-cigarettes. That is one problem. More important, the trials did not mimic the social support that would come from, say, a friend who quit smoking using e-cigarettes and is offering advice and guidance. The inflexibility of trials does not resemble the real-world process of trying, learning, improving, asking, and optimizing that real-world decisions entail. Clinical trials are designed to measure biological effects (and even then they have problems), not complex consumer choices.
But it is actually even worse than that. A common failing in epidemiology is not having a clue about what survey respondents really mean when they answer questions. There is no validation step in surveys where pilot subjects are given an open-ended debriefing of how they interpreted a question and what they really meant by their answer. (I always do that with my surveys, but I am rather unusual.) So consider what a negative response to “tried to quit smoking with e-cigarettes” really means. If a friend shoved an e-cigarette into a smoker’s hand and said “you should try this”, but she refused to even try it, she would undoubtedly not say she tried to quit smoking with e-cigarettes. But in a clinical trial, if that were her assignment, she would be counted among those who used e-cigarettes to try quitting, thus pulling down the success rate.
If she tried the e-cigarette that was thrust at her, but did not find it promising, chances are that in a survey she would not say she tried quitting using e-cigarettes. (She might, but given the lack of any reporting about piloting and validation of these survey instruments, we can only guess how likely that is.) If she passed that first hurdle, of not rejecting e-cigarettes straightaway, but used them sometimes for a few days or weeks, she might or might not say she tried quitting using e-cigarettes. But if she actually quit using e-cigarettes, she would undoubtedly count herself among those who tried to quit using e-cigarettes. I trust you see the problem.
It is the same problem that is common in epidemiology when you read, say, that 20% of the people who got a particular infection died from it. This usually means that 20% of the people who got sick enough from it to present for medical care and get diagnosed died, but countless others had mild or even asymptomatic infections. Everyone in the numerator (died in this case, quit in the case of e-cigarettes) is counted, but an unknown and probably very large portion of those in the denominator (got the infection, were encouraged to try an e-cigarette) are not. Clinical trial results are (at best) analogous to the percentage you would get if you did antibody tests in the population to really identify who got the infection. That is the right way to measure the percentage of infected who die. But if you then applied that percentage to the portion who presented for medical treatment, you would be underestimating the number of them who would die. That is basically what West et al. did. Their 900,000 are those for whom e-cigarettes seemed promising enough to be worth seriously trying as an alternative, but they applied a rate of success that was (again, at best) a measure of the effect on everyone, including those who did not consider them promising enough to try.
This would be a fatal flaw in West’s approach even if the trials represented optimal e-cigarette interventions, providing many options among optimal products, and the hand-holding that would be offered by a knowledgeable friend, vape shop, or genuine smoking cessation counseling effort. They did not, and so underestimated even what they might have been able to measure.
As a final step, West et al.’s approach debits e-cigarettes with an estimated decrease in the use of other smoking cessation methods caused by those who tried e-cigarettes instead. These are the methods that are believed to further increase the cessation rate above the unaided quitting that West debited across the board (the major error discussed above). We can set aside deeper points about whether estimates of the effects of these methods, created almost entirely by people whose careers are devoted to encouraging these methods, are worth anything. West et al. assume that those methods would have had average effectiveness had they been tried by those who instead chose vaping. They also still assume that every switching attempt would have been replaced by another quit attempt in the absence of e-cigarettes, as discussed above. This lowers their estimate from 22,000 to 16,000. But a large portion of smokers who quit using e-cigarettes do so after trying many or all of those other methods, often repeatedly. Assuming those methods would have often miraculously been successful if tried one more time makes little sense.
As a related point that further illustrates the problems with their previous steps, recall that the 2.5% is their smoking cessation rate in excess of that of those who tried unaided quitting or some equivalently effective protocol. But it seems very likely that the average smoker who tries to switch to e-cigarettes has already had worse success with that other protocol than has the average volunteer for a cessation trial. This is the “I tried everything else, but then I discovered vaping” story. I am aware of no good estimate for this disparity, but if the average smoker who tried to switch were merely 1 percentage point less likely than average to succeed with the other protocol (e.g., because she already knew that it did not work for her), then the multiplier should have been 3.5% (7.5%-4% rather than 7.5%-5%). This is trivial compared to the error of using the incredibly low estimated success rate suggested by the trials in the first place, of course, but that little difference alone would have increased West’s estimate by 40%. This illustrates just how unstable and dependent on hidden assumptions that estimate is, even apart from the major errors.
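The sensitivity claim is easy to verify with a two-line calculation (the 1-percentage-point shift is, as noted, a hypothetical figure):

```python
attempters = 900_000
ecig_rate = 0.075  # West et al.'s implied success rate when using e-cigarettes

# Their multiplier: counterfactual unaided rate of 5%, so a 2.5-point excess.
baseline = attempters * (ecig_rate - 0.05)  # ~22,500

# If switchers were 1 percentage point less likely than average to succeed
# unaided (a hypothetical figure), the excess becomes 3.5 points.
adjusted = attempters * (ecig_rate - 0.04)  # ~31,500

relative_increase = adjusted / baseline - 1  # ~0.4, i.e. a 40% larger estimate
```

Because the estimate is built from a small difference between two uncertain rates, a tiny shift in either rate moves the result dramatically.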
Returning to the reality check
But lest we get lost in the details, the crux is still that West implicitly concluded that the vast majority of those who switched from smoking to vaping did not quit smoking because of vaping. The authors never reflect on how that could possibly be the case. They do, however, offer an alternative analysis, in what are effectively the footnotes, that gives the illusion of responding to this problem without actually doing so. They write:
The figure of approximately 16,000–22,000 is much lower than the population estimates of e-cigarette users who have stopped smoking (approximately 560,000 in England at the last count, according to the Smoking Toolkit Study). However, the reason for this can be understood from the following….
What follows is even weirder than their main analysis.
West’s “alternative” analysis
They actually start with that 560,000. That is inexplicable since it is possible to estimate the year-over-year change in 2014, as I did, rather than working with the cumulative figure. The 560,000 turns out to be well under half what you get if you add the current vapers and ex-vapers among ex-smokers from the statistics I cite above. So their number already incorporates some unexplained discounting from what appears to be the cumulative number. But since I am baffled by this disconnect, I will just leave this sitting here and proceed to look at what they did with that number.
As far as I can understand from their rather confusing description of their methods here, their first step is to eliminate those who were already vaping by 2014, and thus did not switch in 2014. That makes sense, though it would have been easier to just start with that. When they do this, they leave themselves with 308,000. So they started with something much lower than what you get from the statistics I looked at, and ended up with something that is half-again higher than the rough estimate from those statistics. Um, ok — just going to leave that here too. But the higher starting figure makes it even more difficult for them to explain away the reality check.
Their next step is the only one that seems valid. They estimate that 9% of ex-smokers who became vapers did so sometime after they had already completely quit smoking, and subtract them. This is plausible. An ex-smoker who is dedicated to never smoking again still might see the appeal of consuming nicotine in a low-risk and smoking-like manner again. (Note that this should be counted as yet another benefit of e-cigarettes, giving those individuals a choice that makes them better off, even though the “public health” types would count it as a cost because they are not being proper suffering abstinents. It might even stop them from returning to smoking.)
Of course, this only makes a small dent. So where does everyone else go? Most of them go here:
It has to be assumed on the basis of the evidence [6, 7] that only a third of e-cigarette users who stopped smoking would not have succeeded had they used no cessation aid
It is assumed that, as with other smoking cessation aids, 70% of those recent ex-smokers who use e-cigarettes will relapse to smoking in the long term 
This takes them down to 28,000.
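As far as their description allows, the chain of discounts that produces the 28,000 appears to be roughly the following; this is a reconstruction, since the paper does not spell out its arithmetic:

```python
# Reconstructed chain of discounts behind West et al.'s alternative analysis.
switched_in_2014 = 308_000   # their count of smokers who switched during 2014
already_quit_share = 0.09    # 9% had quit smoking before taking up vaping
caused_share = 1 / 3         # "only a third ... would not have succeeded" unaided
relapse_rate = 0.70          # assumed long-term relapse among recent quitters

remaining = switched_in_2014 * (1 - already_quit_share)  # ~280,000
caused_quits = remaining * caused_share                  # ~93,000
long_term_quits = caused_quits * (1 - relapse_rate)      # ~28,000
```

Note that nearly all of the reduction comes from the two assumptions criticized below, not from the defensible 9% step.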
Taking the latter 70% first, any limitations in relying on a single source for this estimate (another West paper) are overshadowed by: (a) There is no reason to assume switching to vaping will work as poorly, by this measure, as the over-promising and under-delivering “approved” aids that fail because they do not actually change people’s preferences as promised. Indeed, there is overwhelming evidence to the contrary. (b) Many of those in the population defined by “started vaping that year and were an ex-smoker as of the end of the year” have already experienced a lot of the “long term”. That is, if we simplify to the year being exactly calendar 2014, some people joined that population in December, and thus a (correct, undoubtedly much lower than 70%) estimate of the discounting between “smoking abstinent for a week or two thanks to e-cigarettes” and “abstinent at a year” (a typical measure for “really quitting” as noted above) is appropriate. But some joined the population in January and are already nearly at the long term. On average, they will have been ex-smokers for about six months, and being abstinent at six months is a much better predictor of the long run than the statistic they used (which, again, is wrong to apply to vaping). Combining (a) and (b) makes it clear that this is a terrible estimate.
As for the first of those major reductions, references 6 and 7 do not actually provide any reason that “only a third…has to be assumed”. Those are the same references they cite for the 2.5% above. So this is just a reprise of the 2.5% claim, and suffers from the same errors I cited above.
You see what they did there, right? The reality check I offered is “your results imply that 90% of new ex-smoker vapers did not quit because of vaping; can you explain that?” Either anticipating this damning criticism or by accident, they provided their answer: “Yes, we assume — based on nothing that remotely supports the assumption — that a majority of them did not really quit and a majority of those who did would have quit anyway (and 9% were already ex-smokers, and some other bits).”
This step basically sneaks in the same fatal assumptions from their original calculation but is presented as if it offers an independent triangulation that responds to the criticism that their original calculation has implausible implications. Here is a pretty good analogy: Someone measures a length with a ruler that is calibrated wrong by a factor of ten. They are confronted with the fact that a quick glance shows that their result is obviously wrong. So they make a copy of their ruler and “validate” their results with an “alternative” estimation method.
Oh, and at the end of this they knock off another 6000 based on what appears to be double counting, but at this point who really cares?
Their first version of the estimate is driven mainly by their assumption that attempting to switch to vaping is close to useless for helping someone quit smoking compared to unaided quitting, and also that all those who attempted to switch would have tried unaided quitting in the absence of e-cigarettes. There are also other errors. Their second version is based on the “reasoning” that because we have assumed that attempting to switch to vaping is close to useless, most of those whom we observed actually switching to vaping must not have really quit smoking because of vaping — and so (surprise!) approximately the same low estimate.
So nowhere do they actually ever address the reality check question:
Seriously? You are claiming that almost everyone who ventured into one of those weird vape shops, who spent hundreds of pounds on e-cigarettes, who endured the learning curve for vaping, who ignored the social pressure to just quit entirely, and who decided to keep putting up with the limitations and scorn they faced as a smoker and would still face as a vaper, that almost all of them were someone who was going to just quit anyway? You are really claiming that almost all of them said, “You know, I think I will just quit buying fags this week — oh, wait, you mean I instead could go to the trouble to learn a new way of quasi-smoking and spend a bunch of money on new stuff and keep doing what I am doing even though I am really over it and ready to just drop it? Where do I sign up?” Seriously?
Reality. Check. Mate.
For what it is worth, if you asked me to do a back-of-the-envelope estimate for this, I would probably go with something like the following:
There were about 200,000 new vaping ex-smokers. It seems conservative to assume that about half of them quit smoking due to vaping. 100,000. Done.
That is obviously very rough, and the key step is just an educated guess. But an expert educated guess is often far better than fake precision based on obviously absurd numbers that just happen to have appeared in a journal (as a measure of something — in this case, not even the right thing). In this case, it has far better face validity than West et al.’s tortured machinations.
[Update, 4 Oct:
Since this was posted, two other flaws in the West analysis have become apparent. The first comes from my Daily Vaper article which was based on the lessons from this, a terse presentation of the many ways in which vaping causes smoking cessation. That is worth reading in its own right if you are interested in this stuff. What occurred to me when writing that was that I was too charitable in just saying “ok fine” about the dropping of all ex-smokers who had become vapers after already quitting smoking. For some of them, taking up vaping caused them to not return to smoking. So a few of them should actually be counted. (One might make the semantic argument that the claim is about how many were caused to quit, not how many were caused to be (i.e., become or remain) ex-smokers, so they really do not count. But it is still worth mentioning.)
The second flaw came up in the comments, thanks to Geoff Vann. He figured out an internal inconsistency in the West approach. Basically, if their base methodology (assumptions, etc.) is applied to their step that removed the established vaping ex-smokers from that 560,000, it turns out that you cannot remove nearly as many as they do remove. You can see the details in the comment thread. Internal inconsistencies are always interesting because even if someone denies the criticisms from external knowledge and analysis — which are really far more damning — they cannot complain about being held to their own rules!
Thanks for this Carl. Like you, West’s figures never made sense to me. I will still have to reread this to properly understand (numbers aren’t my thing).
But one thing I would like to add that you didn’t include was peer pressure. As a vape shop owner I know that peer pressure has quite a strong influence, especially on my younger customers. People tend to switch in groups, especially amongst the younger crowd as well as office workers. There is a kind of tipping point in the office where I think once you hit 3 vapers all the smokers start to feel like the odd one out, so decide to join in.
So this is one more thing that clinical trials can never measure. So what about peer pressure? How many of those who switched did so simply through peer pressure and not out of any real desire to give up?
And while I am here, what about people like me who made the switch not because I wanted to give up smoking, but because I wanted to keep smoking? Vaping offered me the near perfect win win.
You can think of the peer pressure / critical mass effect as being part of the “social support” concept that I mention but do not detail. I have previously delved into the details of it. However, I am not sure I have ever specified that particular effect. It is worth mentioning specifically and I will keep it in mind. The lack of some “get on board” and “hey, look, this IS normal” effect in the trials — where someone is isolated — can have a huge effect. Moreover, this is a major contributor to why switching is a more robust (more likely to stick with it) method of quitting than others, as I discussed in the second half.
I tried to allude to the not-wanting-to-quit quitters, but again you offer an important nuance that I did not. Even if someone is not an “accidental quitter”, and made a concerted effort, it may not be that he merely would not happen to have made a concerted quit attempt that year. He may not have had any interest at all in making a quit-to-abstinence attempt.
Drs. West et al., should feel compelled to respond to this devastating critique, although I doubt an effective rebuttal is possible. Dr. West is a psychologist and Professor of Health Psychology and Director of Tobacco Studies at University College London. I’ve just read some of his other published material and he doesn’t seem like an idiot or kook. In fact, judging from his blog he seems perfectly rational. http://www.rjwest.co.uk/blog.php
So I really don’t know why or how he managed to screw this up so badly.
No, not an idiot, a kook, or a complete extremist. But unfortunately stuff like this is business as usual in public health research, where there is almost never any serious review and certainly no professional price to pay for doing bad science. So it becomes easy to do. I am guessing the authors genuinely think there is nothing wrong with doing work like this.
I would welcome engagement. Indeed, I would say there is a decent chance that somewhere in my analysis I got something wrong — I was working fairly blind, after all, since they did not report most of their methods. But I am completely confident that my main points, and of course the overwhelming implications of the reality check, will hold up.
West has always been on the side of the “assisted/supported quitting is more effective” argument and tends to want to keep that infrastructure. He seems a decent and honest man, but he does not want people in the tobacco control industry to lose their jobs.
I suppose the follow-on questions are:
a) what damage has been done by this study since the “~20,000” number first appeared at the London ecig summit in 2014 (“A lie can travel halfway around the world before the truth can get its boots on”)?
b) why didn’t Clive attempt a similar critique himself in 2015 (is this much different from overstating formaldehyde)?
c) why were NNA & other pro-vaping groups actively supporting the 16–22k number when much-publicised ASH surveys were showing numbers in a different ball-park?
d) why would a doyen of Tobacco Control do this?
e) why did this doyen “double down” on these 2014 numbers in a subsequent study using an ARIMAX technique on the 2015 ecig quit numbers?
f) what can be done to correct this error?
Thanks for numbering. It helps. Even with letters :-)
a) I really have no idea; it is obviously pretty much impossible to know. However, it is probably easier to trace than some traveling lies. This is unlikely to have influenced consumers the way that, say, formaldehyde lies do (it is some arcane statistic which does not necessarily even sound low to the average person). So it mostly comes down to policy debates, where its influence might be found in the record. There is Australia, for instance; someone there would have to judge how much harm it did.
b) Yes, this is MUCH different from criticizing the formaldehyde junk. Any reasonably bright person can get up to speed on that by following the issue and looking a few things up. I have never studied chemistry, other than in passing, and I can write the critiques. Hundreds of bright and attentive vapers have written cogent criticisms. There are probably a hundred thousand organic chemists and occupational health experts in the world who, if asked to look at it, could get up to speed in half an hour and critique it. Not all of them could do as good a job as, say, Igor Burstyn, but it is easy enough that that level of skill is really not needed.
By contrast, dissecting a piece of junk epidemiology like I just did is hard. I have a very particular set of skills. It may look easy after I have already done it, but there are probably only a handful of people who could have done what I just did. None of the others are in this realm. There are quite a few people who can pick away at some of the obvious errors, but I would be totally bowled over to see an analysis that covered half of what I did by anyone else. Consider: When a bad epi paper comes out and the West/Bauld/etc crowd write their “science press responses” page (or whatever it is called) responses to it, the errors in those are almost as dense as those in the original paper. They usually get a few low-hanging criticisms correct, but they get a lot of stuff badly wrong.
Clive could not have done this. He knows that. So he employed a different skill (one that turns out to get you much further in life), being able to figure out who can do it and persuading him to do it. There is a level of skill required to figure out who really has the most skill in a technical area. He has that much skill at forensic epidemiology, making him one of the best in our realm, and letting him figure out that I could write this. This is also why those others can get away with the above “science press” responses — people reading them do not even know enough to know that they should not be trusting forensic epidemiology from those authors.
c) I have no idea. It would be interesting to ask them. Maybe I will write a Daily Vaper story.
d) Oh, that’s easy. Because tobacco control (and this type of public health more generally) is a realm of junk science. That is not just casting general aspersions — it really is the reason. They are so insulated from good science and any review of their work that they just do stupid shit without it even crossing their minds that they won’t get away with it. Or even that they are getting away with it. Insulated from scientific criticism, they never learn to get better. They don’t even realize that what they are doing is bad and that there is a better way.
In this realm, someone gets credit for being a doyen by doing glorified tech work (combined with politicking and admin work). They get control of a budget and have their people do surveys, which they then spit out results from. They do not even have to understand their own data — in this case, I think I made it pretty clear they do not. Fame and fortune in this realm is not correlated with scientific skill.
And then there is the other angle for a possible explanation, presented here in the comments by David Allan: He has financial and political incentive to make sure tobacco controllers keep their jobs and people keep using their lousy smoking cessation methods.
e) I am not familiar with the story, but I would guess it was because he genuinely did not understand how badly he screwed up the first time. (If he does it again, we will know that it is something more nefarious than that, but what came before now might have been genuine ignorance rather than malice.)
f) I just did it! Weren’t you paying attention? ;-)
To explain my tweets earlier today — looking just at the alternative analysis (as above), or para 9 in the “Comments and caveats” section of the paper — it starts with a figure of 560k ecig users who have stopped smoking at the last count (it does not specify a date, and I am not aware that such a total has been published before or since by the STS team).
Of these, 280k (308k – 28k) are identified as having been ex-smoking vapers, and the number quit for at least 1 year is estimated at 252k. If the 1-year quit rate for ecig users is 7.5%, as estimated in the paper, then it would require a minimum of ~3.4m ecig quit attempts in the period up to and including 2013. Multiplying out the figures from the monthly STS powerpoint presentations gives a total for these attempts of ~1,150k.
This may not make much sense unless you are familiar with the STS outputs!
Thanks for that. I now realize I *definitely* did not understand the tweets. So let me see if I understand by writing in my own words what I think you are saying:
This is yet another flaw or inconsistency in what they did. If they are saying that the quit attempts only yielded successes at a rate of 7.5%, then how could we possibly have seen so many vaping ex-smokers, even after those they subtracted out? The number of attempts needed to produce that would have been implausibly astronomical.
Assuming I got that right, I think you are definitely onto something I missed. I believe they could respond (offering what would be legitimate reasoning if you accept the premise of the paradigm they created) as follows: That is not just 2013, but all previous years. Also 9% of those ex-smoking vapers took up vaping after already quitting smoking.
However, none of their other debiting seems to apply. So that would still require 3 million cumulative ecig quit attempts through 2013. This does indeed seem incredibly unlikely given fewer than 1 million in 2014 and the steep growth in popularity (2013 would have been less than 2014, 2012 less still, and 2011, let alone 2010, getting pretty rare). More than tripling the 2014 number — impossible.
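The arithmetic behind this internal-inconsistency check is simple enough to sketch. The figures below are the rough estimates from this comment thread (not official STS outputs), so treat the result as an order-of-magnitude check, not a precise calculation:

```python
# Back-of-envelope check of the internal inconsistency, using the
# rough figures discussed in this thread (all estimates, not official data).

one_year_quitters = 252_000   # estimated ex-smoking vapers quit for >= 1 year
assumed_quit_rate = 0.075     # the 1-year success rate West et al. attribute to ecigs

# Attempts needed to produce that many successes at that rate
implied_attempts = one_year_quitters / assumed_quit_rate
print(f"Implied cumulative quit attempts: {implied_attempts:,.0f}")  # ~3,360,000

# Total attempts implied by multiplying out the monthly STS outputs
observed_attempts = 1_150_000
print(f"Shortfall factor: {implied_attempts / observed_attempts:.1f}x")  # ~2.9x
```

That is, their own assumed success rate would require roughly three times as many quit attempts as their own survey outputs imply ever occurred.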
So this appears to be an internal inconsistency in their paradigm, which is always a powerful criticism. It does not require agreeing with my assessments that input numbers are badly wrong. It does not require recognizing that they missed an important causal pathway. What they did just does not work within its own story.
I concede I might be missing something here. I spent only a fraction of the time thinking this through that I spent on the key points in the blog. But I am not seeing any holes in your (very damning) reasoning.
P.S. Also this probably does not even account for those who went from smoker to vaper to nothing, who drop out of the ex-smoker vaper pool, as I noted in the first bit of my analysis. This makes it even worse.
Yes — thank you — I’m better with numbers than words! The STS is West’s personal ‘fiefdom’ — he who controls the data controls the debate (mainly!) — and in the world of political activism, data is a tool, not a window.
Without in any way wishing to give you more work, you might cast an eye over the sequel.
Yeah, I think I will pass. Anyone who cares about accuracy, and has enough understanding to recognize simple fatal flaws when they are pointed out (which apparently does not include him), can already see not to trust his analysis.
Data, along with analysis, is indeed mostly used for support rather than illumination in public health. However, controlling data does not give you control of the debate. Witness what appears here. However, it does cause people who do not understand that data does not just magically produce knowledge (e.g., most science reporters) to be far too deferential, which has the same effect as controlling the debate.
Very thorough, interesting and illustrative dissection of the West et al study. It really sheds light on how unwarranted assumptions lead to such a meager figure of 16 thousand “English smokers becoming ex-smokers thanks to e-cigarettes”, given the much larger demographic of vaping ex-smokers in the UK.
What is your opinion of other studies on the demographic effects of e-cigarettes? For example: (1) the cross-sectional study
“Prevalence of population smoking cessation by electronic cigarette use status in a national sample of recent smokers”, Daniel P. Giovenco & Cristine D. Delnevo. doi: https://doi.org/10.1016/j.addbeh.2017.08.002
and (2) the longitudinal (or perhaps quasi longitudinal) study based on the US Current Population Survey-Tobacco Use Supplement (CPS-TUS)
“E-cigarette use and associated changes in population smoking cessation: evidence from US current population surveys”, Shu-Hong Zhu, Yue-Lin Zhuang, Shiushing Wong, Sharon E Cummins, Gary J Tedeschi. doi: https://doi.org/10.1136/bmj.j3262