by Carl V Phillips
I have spent a lot of my career pointing out how the choice of research methods and statistical models — in particular, the choice of which to report among the many that were tried — creates bias in the epidemiology literature. It is easy to create a study that is designed to get a particular result, especially if the desired result is to fail to observe a phenomenon (in the days before e-cigarettes, I was baffled that the anti-ST people never ran an intended-to-fail intervention to “show” that THR did not work).
It seems that the American Legacy Foundation has taken this one step further. In a comment on the ECLAT study, which found that many smokers who were forced to try e-cigarettes for a while (as study participants) decided to switch to them, they basically described the designed-to-fail methodology they would have used and criticized the honest researchers for not using it. Mike Siegel summed it up (emphasis in original):
According to the press release: “The researchers reported that e-cigarettes decreased some smokers’ cigarette consumption and that 8.7% quit smoking 40 weeks after the intervention ended. Unfortunately, they also found that smokers quit rates were not statistically different whether given e-cigarettes with or without nicotine – thereby causing a placebo effect. … We cannot conclude from this study that e-cigarettes promote cessation. While the study showed that some smokers quit, it does not show that the product itself had any role in the behavior change. In fact, the results merely show that sucking on an empty cigarette holder (a placebo) would likely accomplish the same thing.”
This press release misses the whole point. And in doing so, it ends up misleading the public.
There is no true “placebo” effect involved with electronic cigarettes because the mimicking of smoking with the use of a cigarette-like device is the main point of the product. We do not want research to control for this effect. We want research to measure this effect.
Obviously Legacy is wrong about a cigarette holder being the same experience as an e-cigarette. But they did figure out that if you want to design your study to show a null result — to minimize the apparent effect of e-cigarettes on smoking cessation — you should compare nicotine e-cigarettes to non-nicotine and claim that this is the contrast of interest. Based on that insight, they went back and pretended that this existing study had such a design flaw and reinterpreted the results accordingly.
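To see concretely why that contrast is designed to fail, consider a toy simulation. The quit probabilities below are purely hypothetical illustrations (not estimates from ECLAT or any real study): even if nicotine e-cigarettes genuinely help, the gap between nicotine and non-nicotine versions is much smaller than the gap between nicotine e-cigarettes and doing nothing, so a trial of plausible size will usually "fail to find" the former while easily detecting the latter.

```python
import math
import random

random.seed(0)

def quits(n, p):
    """Simulate one trial arm: how many of n smokers quit, each with probability p."""
    return sum(random.random() < p for _ in range(n))

def significant(x1, n1, x2, n2, z_crit=1.96):
    """Two-proportion z-test: is the difference in quit rates significant at ~5%?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return False
    return abs(p1 - p2) / se > z_crit

def power(p_a, p_b, n=150, trials=2000):
    """Estimated probability that a trial with n per arm detects the difference."""
    hits = sum(significant(quits(n, p_a), n, quits(n, p_b), n)
               for _ in range(trials))
    return hits / trials

# Hypothetical true quit probabilities (assumptions for illustration only):
P_NOTHING = 0.05      # no intervention
P_NO_NIC_ECIG = 0.08  # non-nicotine e-cigarette (behavioral substitution alone)
P_NIC_ECIG = 0.10     # nicotine e-cigarette

print("nicotine vs non-nicotine:", power(P_NIC_ECIG, P_NO_NIC_ECIG))
print("nicotine vs nothing:     ", power(P_NIC_ECIG, P_NOTHING))
```

Under these assumed numbers the nicotine-vs-non-nicotine contrast is badly underpowered and will usually come up "null," while the same product compared against doing nothing shows its effect far more often. Declaring the weak contrast to be the one that matters is exactly the designed-to-fail move described above.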
Part of the problem is the entire notion of using clinical trials to study complicated learning- and socially-influenced processes like THR. You might be able to argue that testing medicalized nicotine products (the way NRTs are marketed) can be done reasonably in a clinic because they are used in a very clinical way. But that is not true of most ways of quitting smoking. In fairness to the ECLAT authors, that methodology has some advantages, and they were not actually doing a standard cessation trial. But the RCT fetish that is common among medics who only half-understand scientific research makes it very easy to design a study to fail and claim that it must be a good study because it is an RCT.
RCTs usually have net advantages compared to observational studies when (a) the assigned protocol is a realistic version of what someone would experience in real life and (b) the mere act of having people in a clinical setting and assigning them something does not affect the outcome. This makes them nice for examining medical procedures or treatment drugs, where these conditions are pretty much met. But they are quite bad for studying behavioral phenomena, especially those where, in real life, people fiddle with the details of the methods and act on their own without the artificial pressure of being in a study.
A further complication is what Siegel alluded to: RCTs only work well if (c) it is obvious what to compare the intervention to. Comparing nicotine to non-nicotine e-cigarettes is not an interesting comparison. In any case, despite the rhetoric you hear about placebos, most proper RCTs do not compare a treatment of interest to a placebo, but to the realistic alternative. To see if a new method for performing an appendectomy produces fewer complications, you do not assign half of the subjects to the placebo of being anesthetized but having their appendix left in. That would be insane. You compare the new method to the best available old method.
This further emphasizes the importance of point (b). Who do you compare the group assigned to use e-cigarettes to? Should they be given a placebo treatment of just being handed a quit-smoking pamphlet that is known to have no effect? If so, you are still looking at people who agreed to participate in the trial (not representative of the population), and you are comparing people who were asked to take a major step to those who just throw away a piece of paper and forget the whole thing. Merely to control for the Hawthorne effect (the effect of feeling like you are being studied), the alternative may need to be more aggressive than that. To control for any placebo effect, it would be necessary to give people pills that are inert but described as being a satisfying substitute for smoking (not a “cure” for it), because everyone knows that e-cigarettes are about substitution. That fiction is unlikely to hold up very long.
Basically, the more thought you give to trying to do the science right, the more clear it becomes that there is no particularly good way to do the RCT. Thus, the advantages of observational research over RCTs start to predominate.
As an aside for those who click through and read that Siegel post: You will notice that the thesis of the post is about Legacy failing to disclose that they receive funding from the pharmaceutical industry, which stands to lose sales as a result of e-cigarettes. I have to say that it seems like rather a stretch to demand that a large corporation disclose their relatively modest pharma funding on everything they write. It is kind of like asking FDA to do the same. (Perhaps the more relevant disclosure would be that Legacy was created and funded by what amounts to a sales tax on cigarettes, the Master Settlement Agreement.)
The impact of pharma funding on the anti-THR attitudes of Legacy and other pseudo-health corporations is somewhere between zero and trivial. Part of the reason is that there are much stronger self-interested motives for being anti-THR. More important still, people do not adopt these semi-religious beliefs because of funding. Many gravitate to where there is funding that supports the mission they have adopted, but that is causation in the other direction. Finally, the amount of money at stake is trivial to the pharmaceutical industry. They give grants to keep a hand in things and get inside information, certainly. But it is very difficult to believe that they are so concerned about relatively modest erosion of the tiny tiny corner of their business that is smoking cessation that they would exert pressure on those they fund to attack e-cigarettes.
Recall that yesterday I pointed out that holding an unrealistic view of your enemies’ motives is a recipe for adopting bad tactics. While this case is not quite as dramatic as the one I was discussing, it is another example. It is a mistake to think that pharma cares so much about THR that they are throwing around bribes to try to discourage it (even if you are willing to assume they would be willing to take such actions), and also a mistake to think that those funds play a major role in the decisions of anti-THR actors. It is probably safe to say that the impact is not exactly zero, but there are much more important forces afoot. If we focus on the red herring of donations rather than the major social forces and other base interests, we are likely to be rather less effective.