by Carl V Phillips
As you have probably already heard, a new clinical study reported in the Lancet found that in the artificial context of a smoking-cessation clinic — using the very odd population of people who go to such clinics — when people were offered an inadequate [Update: possibly inadequate -- see * footnote] quantity of low-quality e-cigarettes, they became abstinent from smoking (perhaps temporarily) at about the same rate as those offered nicotine patches. Woo hoo.
[*I have now seen it reported by a study author that participants had access to as large a quantity as they wanted (gee, you think they might have bothered to mention that in the study methods), despite the observation that participants consumed less than is usually adequate as a substitute for someone just quitting smoking. Perhaps they were just not properly advised about how much to try to use, or they were scared to use much, given that e-cigarettes are banned where the study took place. Whatever the reason, the inadequate quantity of consumption probably reduced the rate of switching.]
My title for this post is way too long, but I still had many other phrases or thoughts I wanted to fit into it, including:
damning with faint praise
social phenomena cannot be effectively studied in a clinic
designed to fail (presumably unintentionally), though not quite enough to manage to fail
and, perhaps most important,
ivory-tower researchers seem to think that a well-established fact, something that everyone who is paying attention to all of the evidence already knows, is not true until they can show it in one of their artificial experiments (or, as the joke goes, “ok, that works in the real world, but let’s see if it actually works in theory”)
I realize that some e-cigarette advocates have embraced it as good and exciting news, but I would suggest not getting too excited, for a few reasons. First, this study does not actually address the real-world phenomenon of THR. Real THR does not consist of shoving one particular option into the hands of people and saying “do this rather than smoking”. Even when that particular option might be ideal for some people (which seems not to be true in this case), it is not ideal for everyone. Imagine a hypothetical world in which one kind of e-cigarette was more satisfying than smoking for 10% of smokers, another kind for a different 10%, and snus for another 10% (and that no one else liked those products at all). In that scenario, we can help 30% of all smokers quit and also be happier for having done so. But any trial that tried to force one of those on people would show that it fails 90% of the time.
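To make the arithmetic of that hypothetical concrete, here is a minimal sketch (all of the numbers are invented for illustration, exactly as in the thought experiment above, not taken from any study):

```python
# Hypothetical illustration (all numbers invented, not from the study):
# three low-risk products, each one fully satisfying a different,
# non-overlapping 10% of a population of 1000 smokers.
smokers = 1000
fans = {"ecig_A": 100, "ecig_B": 100, "snus": 100}  # smokers each product suits

# Real-world THR: every smoker can pick whichever product suits them,
# so all three groups switch.
switch_with_choice = sum(fans.values())

# A trial that forces one arbitrary product on everyone only reaches
# the 10% who happen to like that particular product.
switch_in_trial = fans["ecig_A"]

# The same menu of products that helps 30% of smokers quit looks like
# a 90% failure when any one product is tested in isolation.
trial_failure_rate = 1 - switch_in_trial / smokers
```

The point of the sketch is just that "helps 30% of smokers" and "fails 90% of the time" can both be true of the very same products, depending on whether people get to choose.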
Second, the results were within the margin of statistical error from the news media being blanketed with a report that said “new study shows e-cigarettes do not work as well as those wonderful ‘approved’ nicotine patches, so there is no reason to allow them on the market.” In case the enormous importance of that little bit of random good luck is not clear, let me explain. In the study, e-cigarettes did a bit better than patches, but the difference was less than “statistically significant”, which basically means “the fact that e-cigarettes did a bit better rather than a bit worse is quite conceivably just due to luck of the draw; a repetition of the exact same study might well reverse the order.”
So, a little bit of different random noise in their results, and e-cigarettes would have performed a bit below patches rather than a bit above them in terms of causing smoking abstinence. Had that occurred, this study would be making headlines as yet another reason to ban e-cigarettes. And instead of the press embargo lifting after business hours on Friday (you are probably not aware, but the journal employed that classic tactic to minimize press coverage of an announcement), it would have lifted on a Tuesday morning so the story could headline all the health news sections that week.
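For readers who want to see the "luck of the draw" point concretely, here is a small simulation sketch. The arm size and quit rate below are hypothetical, chosen only to be roughly plausible for a trial of this sort, not taken from the study itself: when two arms share the same true quit rate, which arm comes out ahead in any one trial is essentially a coin flip.

```python
import random

def share_of_trials_where_a_wins(n_per_arm, true_rate, reps, seed=0):
    """Repeat a two-arm trial many times, with BOTH arms sharing the
    same true quit rate, and return the fraction of repetitions in
    which arm A happens to produce more quitters than arm B."""
    rng = random.Random(seed)
    a_wins = 0
    for _ in range(reps):
        quits_a = sum(rng.random() < true_rate for _ in range(n_per_arm))
        quits_b = sum(rng.random() < true_rate for _ in range(n_per_arm))
        if quits_a > quits_b:
            a_wins += 1
    return a_wins / reps

# Hypothetical numbers: 300 subjects per arm and a 6% true quit rate
# in both arms. Roughly half of the repetitions put arm A ahead (the
# rest are ties or have arm B ahead), so the observed ordering tells
# us almost nothing when the underlying rates are equal.
share = share_of_trials_where_a_wins(n_per_arm=300, true_rate=0.06, reps=2000)
```

In other words, a headline built on which arm happened to finish first in a result like this one is a headline built on a coin flip.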
As I noted in the title, this time it worked out. But next time it might not.
And “worked out” is pretty faint praise in this case. Nicotine patches are a fairly worthless product. They work well for some never-smokers I know who use them for performance enhancement; they are pretty good for that because they deliver a constant dosage of nicotine, making them kind of like sipping coffee all day, which is perfect for some people. But history tells us they are pretty useless for quitting smoking (and you do not need any clinical trials to tell you that: in the USA, they have existed, they have been heavily touted, and yet smoking rates have basically just tracked the uptake of low-risk alternatives — enough said). Indeed, the claim that e-cigarettes merely worked just as well as patches (or even a little better) is flatly contrary to everything we know about e-cigarettes and how well they work. This story does not provide further evidence that e-cigarettes work; it implies that they do not work as well as we know they do!
Instead of interpreting the study as important news, it is more useful to view the interpretations of the results as misguided science. That avoids the problem of buying into a bad scientific paradigm that is ultimately bad for THR. This is exactly the junk-science interpretation of what constitutes evidence that has been used to deny the overwhelming evidence about THR for the last decade. Yes, it is nice to be able to respond to those who play this game by saying “ha, we have a study too” — but it is just one and it is a pretty weak result. Better to focus on fighting the ANTZ’s repeated denial that other evidence is what is useful, which becomes harder if we implicitly endorse the denial when it is convenient.
Clinical trials are simply not a useful way to evaluate whether a consumer product is attractive to consumers, for the reasons cited above: the artificial setting, the unrepresentative people, and the inevitably limited range of options. Disappearance data alone (the quantity of the product sold to consumers) tells us far more than any clinical trial ever could. Using clinical trials where they are not useful — let alone claiming that they are more useful than the better evidence — is mistaking the tool for the goal. The fetishizing of the tool of clinical trials (useful for some things, but not everything) reminds me of a two-year-old with a toy wrench playing “fix it”: “wrenches are used to fix things, I am applying my wrench to an object, therefore I am fixing it.”
People who might be happy with a nicotine patch are not the target for e-cigarettes. Even less so are those who go to a clinic looking for some magic bullet that will make them not want to smoke (see my series on second-order preferences to understand what they really want and why they are never going to get it). E-cigarettes work best for the large portion of smokers who have become comfortable with (or resigned to) the fact that they want to keep smoking — or, of course, to do something that is a fully-satisfying substitute.
And all this is to say nothing of the fact that the study report makes clear that the smokers were not given nearly enough e-cigarettes to provide an acceptable substitute (thus intentionally and inappropriately imitating the inadequacy of the patch) and the products were of such low quality that they kept failing. That is, the study was not a very good picture of what would happen in a clinical setting if you were really trying to get people to switch to e-cigarettes.
In fairness to the authors of this study, no data is worthless if interpreted correctly. Better to have something rather than nothing. But that is a big “if”. The honest interpretation of this study should have been,
We know that e-cigarettes are proving to be a popular and effective method for quitting smoking in the real world, and that no serious short-term side-effects have been found based on millions of observations. We do not know specifically how well e-cigarettes would work in a clinical smoking-cessation setting, though the reasonable hypothesis would be “they would work better than the current practice”. This study confirms, as we already had every reason to believe, that even with really lousy products, e-cigarettes are better liked than nicotine patches. This suggests (again, as we already pretty much knew) that clinics that really want people to quit smoking should start offering e-cigarettes.
I hate to give the authors a hard time, because they were just doing their jobs as institutionally-constrained researchers (“must use hammer, so call everything a nail”), and were being vaguely pro-THR (though not so much as to risk offending the tobacco control industry, of course). Most of the blatant lies about this study are concentrated in the press release (and thus in the news reports) which the study authors did not write. However, if they had veto power over the content, as is likely the case, they share the blame.
The press release tried to portray this result as important and groundbreaking. Consider the following excerpts from it:
“First trial to compare e-cigarettes with nicotine patches…”
Ok, fine, it is the first of those. Yawn.
“…only the second controlled trial to be published which evaluates e-cigarettes, and is the first ever trial to assess whether e-cigarettes are more or less effective than an established smoking cessation aid, nicotine patches, in helping smokers to quit.”
I guess there are some bits of literal truth to be found there, but the overall message is very misleading. The reader is led to believe that this study tells us something new, by conveniently ignoring the absolutely enormous quantity of evidence we have from sources other than trials. They might as well be saying “this is the first research done in New Zealand on this topic”.
It is also false to imply that the previous clinical trial (presumably referring to the one by Polosa’s group, which found that many smokers who were not seeking to quit spontaneously switched to e-cigarettes), for all of its limitations, did not show that e-cigarettes work better than NRT products. Polosa’s result clearly demonstrated that e-cigarettes work better, because we already knew how poorly NRT works. It did not matter that there was no comparison within the study — you do not have to show them both on the same map to conclude that New Zealand is further away from you than your corner pizza place, after all.
“Our study establishes a critical benchmark for e-cigarette performance compared to nicotine patches and placebo e-cigarettes…”
Nope. There is nothing critical about this result at all. As a benchmark it might have some value, telling us that even when you seem to be trying to make e-cigarettes fail in that setting, they still do better than NRT. And, of course, the concept of a “placebo e-cigarette” (the label they applied to the zero-nicotine e-cigarettes that some subjects were assigned) is silly; the benefits of an e-cigarette to someone trying to switch from smoking are not limited to the nicotine, and so there can be placebo nicotine but there is no such thing as a placebo e-cigarette. (Aside: when Polosa’s study came out, those who fetishize drug trial methodology attacked him for not including a placebo group, but merely nicotine and non-nicotine e-cigarettes. It will be interesting to see if they say the same now that the study was done by their own people.)
“The study is also the first to evaluate whether there are any adverse health effects associated with using e-cigarettes in a large (300+) group of people, and in real life, rather than a laboratory, situation.”
Um, yeah, except for the slightly larger population of several million people who have used e-cigarettes in real real life. (Note, all the commas in that quote are in the original — I just wanted to point that out to my editors who complain that I use too many commas.)
There is one useful bit of information in the study, though it is pretty buried: The subjects who were assigned to e-cigarettes (either with or without nicotine) were enormously more likely to recommend them to other smokers than those assigned the patch were to recommend that. No shock there, obviously. But it turns out that we know relatively little about exactly how the social marketing of e-cigarettes plays out. Unlike the rest of the results (which are mere weak confirmations of what we already knew) this could be useful new knowledge.
Bottom line: The ivory-tower types need to do arcane artificial studies like this in order to advance their careers. Health science journals need to publish and tout them in order to try to claim that they are the source of knowledge and so people should buy what they are selling at an enormous profit. This does not mean that those of us who are interested in the truth should fall for their marketing. Much like the cigarette companies, they are trying to sell a product that has some benefits, but in this case is ultimately a poor choice compared to alternative methods of inquiry.
Sadly, all but a small handful of the ivory-tower types refuse to soil their hands by actually getting to know real people, THR product users. They use them as study subjects, but they never talk to them, let alone read their blogs and Facebook posts. If they did, they would not overstate the value of studies like this. The real science about what is happening in the world definitively demonstrates the value and success of THR. If we put our faith in artificial studies, however, we are just as likely to get results that contradict what we know as ones that support it.