The unfortunate case of the Cochrane Review of vaping-based smoking cessation trials

As many of you are aware, there was a recent major update to the old Cochrane Review of smoking cessation intervention studies (trials) that gave some or all participants e-cigarettes. This report is an unfortunate turn of events. I foresee yet another highly publicized vaping “success” statistic that so hugely underestimates the benefits of vaping that it is really a perfect anti-vaping talking point.

For those not familiar, Cochrane reviews are complicated-seeming simplistic analyses where a bunch of study results are averaged together, using a technique that is typically called “meta-analysis” though it is properly described as “synthetic meta-analysis” (as in, synthesizing the results; it is not the only kind of meta-analysis). For those not familiar with that methodology, it is basically junk science if a set of fairly strong conditions is not met, conditions which are far from being met in the present case.

For more on that general observation, see this previous post. I am not going to go into that level of detail again here, but I will summarize. First, just because you can declare a bunch of numbers to be part of the same category and average them together doesn’t mean it makes any sense to do so. A bunch of studies with different interventions, different populations, and other different methods cannot be treated as if they were just one big study of a single phenomenon, even if they can all be described with the same imprecise phrase, like “studies of whether vaping helped people quit smoking.” The analogy I thought of while working on this was asking “what is the average mass of house pets?” Yes, “pets” is a category you can create, and average mass is something you can calculate. But why would you want to know that average? It is a meaningless amalgamation of several clearly different collections of observations. Why would you want to know the average smoking abstinence rate, at a given future moment, of people who were handed some e-cigarettes, with some degree of flexibility in their choice, with some level of information and assistance, for some people, at some place and time over the last ten years? Yes, you can calculate that number, but why would you?

Well, you can calculate it in theory, but in reality you are stuck doing something that is a weak proxy for it. The Cochranoids only pretend to be calculating that number because, of course, measures of all those different combinations of “some” do not exist. Instead what they have is whatever nonsystematic combination of “some”s that someone decided to study and write down in a journal article. It is like trying to assess the average mass of pets by looking at the records of one veterinary practice. Do they specialize in dogs or cats? Whichever types of animals they happen to see are going to be what you measure, not the population average. What’s worse, there is no attempt in Cochrane or the typical synthetic meta-analysis to figure out a population-representative weighting (not that you could even do it in this case, but they never even try). By this I mean that you could bring in an estimate of the relative number of dogs and cats in a population and use that to weight (no pun intended) the average of the data you have for dog and cat averages to get a reasonable estimate for the average of the set of all {dogs, cats}. But no, the Cochrane methodology just weights the average by however many observations happened to be in the studies (analogy: averaging cats and dogs based on how many were in the vet practice’s database, even if they see ten dogs for every one cat).

As I noted, this correction is not even available for smoking cessation studies (since they do not represent any real-world practice at all, there is no real-world weighting to use), but the arbitrary sample-size weighting is still a problem. If the collection of studies included one huge study that used a particularly ineffective vaping intervention, it would drag the average way down. If that same study instead had a low sample size, the estimated average would go up. Just think about it. Can a method that has this property possibly be considered valid science? Consider the analogy again: If the vet practice also sees twenty horses, the average mass shoots up. If it sees only one, the average is pulled up, but not that much.
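To make the analogy concrete, here is a toy sketch in Python of the two weighting schemes. Every number in it is invented for illustration; the point is only the mechanics.

```python
# Toy sketch of the pets analogy; every number here is invented.
# One vet practice's records: how many of each species it sees,
# and the average mass (kg) of each.
records = {
    "dog":   {"n_seen": 200, "avg_mass": 25.0},
    "cat":   {"n_seen": 20,  "avg_mass": 4.5},
    "horse": {"n_seen": 20,  "avg_mass": 500.0},
}

# Cochrane-style pooling: weight each group's average by however many
# observations happened to land in the dataset.
total_n = sum(r["n_seen"] for r in records.values())
naive = sum(r["n_seen"] * r["avg_mass"] for r in records.values()) / total_n
print(f"sample-size-weighted 'average pet mass': {naive:.1f} kg")    # ~62.9

# Population-representative weighting: bring in outside estimates of
# how common each species actually is among pets (also invented).
share = {"dog": 0.50, "cat": 0.45, "horse": 0.05}
weighted = sum(share[s] * records[s]["avg_mass"] for s in records)
print(f"population-weighted average:             {weighted:.1f} kg")  # ~39.5
```

Twenty horses in one practice’s database swamp the sample-size-weighted “average pet mass”, exactly the way one large study of an unusual intervention swamps a pooled estimate, while a population-representative weighting (when one exists) at least answers a coherent question.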

But even worse, the most common (by head count) pets never visit a vet. The modal pet category, in terms of individuals, is caged critters like fish, rodents, and the occasional lizard or hermit crab. So the vet practice selection methodology is not representative of “pets”, the originally defined category. The analogy is that the Cochrane paper purports to be looking at the effects of vaping-based interventions on smokers, but really it is only looking at the effect of (a few particular) interventions on people who volunteer for smoking cessation trials. Yes, you can redefine the categories to be “pets who see vets” or “people who volunteer for smoking cessation trials”, but altering your scientific question to better fit your data is another pretty good sign that you are doing junk science. Although that is far better than pretending you are pursuing the original question, while actually analyzing data in a way that could only answer the redefined question, which is what usually happens (including in this case).

And then there is the related problem that clinical interventions are not how most smokers are introduced to vaping. So this was never about measuring the effect of vaping on smoking cessation, but the effect of being told to try vaping in a clinical setting on smoking cessation. Those are very different concepts, but the results are interpreted as if they are the former when they are obviously the latter. If you wanted to assess how much vaping reduces smoking in the actual real world, rather than in barely-existent clinical interventions, you would use an entirely different body of evidence. These trials do approximately nothing to help answer that question.

It turns out that a systematic review of vaping-based smoking cessation trials could legitimately help answer some interesting and useful questions. It is just that this paper does not do that. Most notably: What characteristics of an intervention seem to cause the highest rate of smoking abstinence — which e-cigarettes, what advice and support, which types of people, and whatever else the studies’ methods reporting lets you figure out? With that information, you could design better vaping-based clinical interventions (which are not unheard of, though they are too rare to really affect the question of how much vaping reduces smoking). You could also add useful assessments like what future trials should do for best practices (based on current knowledge) and what characteristics they should test to see what seems to work better.

This potential value of the review only serves to reinforce the fundamental failing of what was done. Why, oh why, would you want to take the success rates from the better-practice interventions and average them together with the rates from other interventions? And weight the result based on how many people happen to have been studied using the various methods? And then report that number as if it meant something? My mind just boggles that anyone ever would think this is a useful question to ask.

So I trust we have established that the number they reported is meaningless junk, even for what it purports to be. By the way, that number is a four percentage point increase in successful medium-term smoking abstinence, compared to null or near-null interventions. I buried this because mentioning a scalar in a headline or early in a piece tends to cause the reader to fixate on that number and consider it the main takeaway. It is not. It is meaningless. I urge you to never repeat it.

The reason I mention it at all is to comment on how low it is. If this were really the measure of the smoking cessation benefits of vaping, it would not make a very good case for vaping. Yes, you can spin it as “the prestigious definitive Cochrane Review [cough cough cough] finds vaping is better for smoking cessation than ‘officially recommended’ methods like NRT.” But the magnitude of “better” is so low that it is easy for someone to convincingly make the case that it is not good enough to justify the scourge of teen vaping, or whatever. Or that it is so low that we can just develop some improvement to the ‘officially recommended’ methods that would be even better.

So far, I have only hinted at the main reasons why that number is not a valid measure of how much smoking cessation is caused by vaping. Even if that statistic were a valid measure of what it could measure — “what happens if you use clinical methods to encourage vaping for smokers who are seeking aid to quit” — and not just some bizarrely weighted average of a random collection of often terrible ways of going about that, it would still be a huge underestimate.

There are three main pathways via which vaping causes less smoking: 1) For some people who are actively attempting smoking cessation, it increases their chance of success. 2) For some people who would not otherwise be attempting cessation, it inspires them to try or just do it. 3) It displaces some smoking initiation, replacing it with vaping instead. I am highly cognizant of the failure to understand this distinction because a colleague and I recently finished a review of those “population model” papers about the effects of vaping on future smoking (which hopefully will see the light of day soon). We discovered that almost every one of those papers just ignored 2) and only looked at 1) as a measure of how much cessation would increase. Some of them did this rather overtly (though they never admitted — or apparently even realized — they were accidentally making this assumption), while for others it was implicit. (Some, but not all, also considered 3) separately, but that is not immediately relevant.)

People described by 2) include “accidental quitters” as well as people who decide vaping is tempting and decide to try to quit smoking (switch) because of that. It seems safe to make the educated guess (for that is all we can really do with the data we have) that this has greater total effect than 1). In addition to creating new cessation attempts (which those “population model” papers mostly assume do not happen), vaping gets “full credit” for any resulting cessation, not just credit for the increase in the success rate (another error in the population model papers). That is, even if someone who gave vaping a try would have quit smoking without it — and thus the fact that switching to vaping is a particularly effective way to quit did not even matter — that case of cessation was still caused by vaping.

Like those problematic population models, the Cochrane approach only looks at 1). Everyone studied is doing something to attempt to quit smoking, or at least is going through the motions and signed up for some guided quitting attempt. So half or more of the cessation effect of vaping is being assumed away.
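A toy calculation (with numbers I invented purely for illustration) shows what assuming away pathway 2) does to the accounting:

```python
# Toy accounting of the cessation credit vaping should get; every
# number is invented for illustration.
attempts_anyway = 100     # quit attempts that happen regardless of vaping
success_without = 0.10    # success rate of those attempts without vaping
success_with = 0.15       # success rate when vaping is used

# Pathway 1): extra successes among attempts that were happening anyway.
# This is the only thing trial-style analyses (and most population
# models) count.
pathway_1 = attempts_anyway * (success_with - success_without)   # 5.0

# Pathway 2): attempts that only happen because vaping is appealing.
# Every success here is caused by vaping, so it gets full credit.
inspired_attempts = 50
pathway_2 = inspired_attempts * success_with                     # 7.5

print(f"quits the trial-style accounting credits: {pathway_1:.1f}")
print(f"quits from inspired attempts (ignored):   {pathway_2:.1f}")
```

In this made-up example, the ignored pathway contributes more quitting than the one that gets measured, which is consistent with the educated guess above.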

And it gets worse still, for both the whole meta-analysis method and this particular exercise. Behavior is not biology. The Cochrane method is sometimes valid if what is being studied is a biological effect (see the above link for more details of other conditions that must be met), but is hopeless for assessing personal behavior. Why? Because people know themselves and make choices, of course. So in a population (place, time) where vaping is reasonably well known, a smoker who finds it an appealing option is likely to try it, and if she was correct that it was indeed just what she needed, then she is going to quit smoking. She is a category 2) success story, or perhaps category 1) if she was already dedicated to quitting. And then what happens? She doesn’t volunteer for a smoking cessation trial!

That is, the people who are accurately self-aware that they are a particularly good candidate for quitting via vaping just do it, and so do not contribute to the study-measured success rate. It is like the fish in the “average mass of pets” analogy — they never show up to the vet to get weighed into the average. This cuts both ways, of course: Anyone who is self-aware that they just need some nicotine gum also quits and is not in the study to give due credit to nicotine gum. The difference, of course, is that basically no one accurately thinks that.
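Here is a toy simulation of that self-selection, with made-up parameters, just to show the direction of the bias:

```python
import random
random.seed(0)

# Toy simulation of self-selection; all parameters are invented.
# Give each smoker a probability that switching to vaping would work
# for them, and assume most of the best candidates (who can tell that
# vaping suits them) just switch on their own and never enroll.
population = [random.random() for _ in range(100_000)]

volunteers = [
    p for p in population
    if not (p > 0.7 and random.random() < 0.8)  # 80% of good candidates exit
]

print(f"mean success chance, whole population: {sum(population)/len(population):.2f}")
print(f"mean success chance, trial volunteers: {sum(volunteers)/len(volunteers):.2f}")
```

The volunteers are systematically the people for whom vaping is least likely to work, so the trial-measured success rate understates the effect in the full population of smokers.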

We also know that people who pursue formal smoking cessation interventions are more likely to be “unable” to quit on their own, which could bias the results in either direction. I.e., it could be that people who just decide to quit are more likely to be helped by vaping, as compared to someone seeking aid, because their baseline success rate is higher and vaping multiplies that. Or vaping might matter less for them because they would be successful even without vaping. But it is almost certainly not exactly the same.

In fairness to the Cochranoids, it was not their task to review category 2) quitters, or selection bias, or other evidence. It was not their job to provide useful information. Their job is to just mechanically average whatever numbers someone hands them. However, that is being rather too fair to them, since they pretended they were measuring something useful. They conclude “There is moderate‐certainty evidence that [vapes] with nicotine increase quit rates….” This claim implies that they are measuring how much quitting is caused by vaping, full stop, not merely how much more likely clinical study volunteers are to be abstinent if they are in the vaping trial arm.

To summarize: If clinical assignment to try vaping really only increases successful smoking cessation (or, more precisely, medium-term abstinence) by four percentage points, it is really not very impressive. But we are pretty sure that is not the case because it is based on a population where many of those most likely to switch have already exited, and it is based on randomly averaging together best practices and poorly designed interventions. Moreover, even if it were right this would only be one of the many pathways from vaping to smoking abstinence, and one of the least important, so who cares that it is low?

On the bright side, most of the headlines and pull quotes I have seen about this fake science say something like “vaping shown to be better for quitting than NRT” or “new study shows vaping helps people quit smoking.” While these stories seem to be all written by someone without a clue about the Cochrane Report, this is a case where three clueless wrongs make a right: At least those vague unquantified messages are correct. (The third wrong is that people have been indoctrinated into thinking that NRT has measurable benefits, so they interpret “better than NRT” as meaning “good” when it really only means “not quite zero”.)

The problem is that after a spate of “this just in!” headlines this month, which will affect almost no one’s beliefs, we can look forward to a few years of this paper being cited as evidence that vaping has a trivial effect on reducing smoking. The four percentage point number will be successfully portrayed as definitive and the entirety of the effect of vaping on smoking prevalence. And everyone who is currently suggesting that it is not total junk, because they like the headlines of the day, is helping make that happen.

Sunday Science Lesson: Smoking protects against COVID-19, but most of the related “science” is badly misguided

by Carl V Phillips

I am skipping the Introduction section here. Which is to say, I assume that the reader is at least somewhat familiar with the overwhelming evidence that people who smoke are much less likely to have bad COVID-19 outcomes. It turns out that this phenomenon and the (often misguided) chatter around it provide a great case study for some general science lessons. Here are a few of those:

1. Small nonsystematic collections of observations < systematic observations / experiments < large, somewhat systematic, reasonably comprehensive collections of observations.

To unpack that, anyone who follows pop discussions of science has learned that systematic focused studies, of whatever sort, offer better information than happenstance data collection. That is basically true when they are on the same scale. So if we have a case series of hospitalized COVID-19 patients that happens to have smoking data, it is not as good as a systematic study that focused on smoking status and COVID outcomes. This is true for various reasons — e.g., the smoking data might not have been accurate because it was not anyone’s real focus. The comparison for those studies in this case is to population averages (i.e., the smoking prevalence among the patients is observed to be lower than for average people in that country), which do not offer a perfect comparison. A study that focused on smoking status, and tried hard to collect that information correctly and figure out what baseline to compare it to, is a lot more reliable.

But in this case, any limitations of the individual studies are made up for by sheer volume. We have a zillion of the former imperfect-but-easy comparisons, and a decent handful of the latter. And almost all of them support the claim that smoking is protective. It really is approximately a zillion. See this thread by @phil_w888 on Twitter, in which he has collected, at the time of this writing, 762 published reports that inform the question of whether smokers have lower rates of COVID-19. Almost all the results point in the same direction; the rare exceptions are what we would expect from normal study error.

This is absolutely overwhelming evidence. Something would have to be systematically misleading about hundreds of different observations, that use different methods in different populations, and have many variations in exposure and outcome measurement. Not to say that is impossible. If we had 762 studies comparing height to breast cancer risk, they might all show that being taller is strongly protective (because men). But it is hard to imagine such a failure to recognize an important variable here. No one has proposed a plausible explanation other than causation.

Anyone who says, “we need to do study X to see if this is really true” is just chasing grant money. Anyone who says “this new study shows it is really true” is apparently not familiar with the hundreds of previous reports. The statement “this new result should convince the deniers” is wrong; it might be what gets someone’s attention for the first time, but that is different. Anyone familiar with the data who did not believe this was real clear back in April, let alone now, is unlikely to be swayed by any evidence.

The evidence is reasonably systematic, by which I mean that it was not apparently cherrypicked in any way to try to “show” something that is not true. It is not limited to specific and possibly odd populations. It is consistent across methodologies, giving us confidence that it is not an artifact of study design.

If you are seeing a parallel to the question “does vaping cause people to quit smoking”, you are spot-on. We do not know that to be true because of some contrived ultra-systematic little study. All of those are trumped by the much broader knowledge.

2. “Meta-analysis” is usually junk science, and this is a great example of that.

All those “meta-analyses” of the smoking-COVID results you see floating around are complete junk. By junk science I do not just generically mean “bad science” but rather “a methodology that even if done as well and honestly as possible produces meaningless results”. As I have previously explained at length (e.g., here), the meta-analysis method of averaging together a bunch of study results (which is not the only form of meta-analysis but is what “meta-analysis” always means when used by people who are not expert in methodology) is only valid if you can legitimately imagine that all the studies are really slices of a single large study (rows of data) that were separated into different datasets for some reason. If that were the case, it would make sense to put them back together.

This is sometimes(!) kinda(!) the case for clinical treatment experiments where the treatments are mostly(!) the same, the outcome measures are reasonably(!) consistent, and people are mostly(!) functioning just as biological machines and thus are fairly interchangeable. Notice all the emphatics in that sentence, though. Even for this best-case scenario, there are departures from the implicit “the data was separated at birth” assumption. As soon as we depart from that simple case, the averaging becomes absurd.

Consider an example most of you are familiar with, clinical intervention trials where people who smoke are persuaded to try to switch to vaping. This is a collection of very different interventions (though they can all be sloppily described as “give vapes to smokers”) in different populations — different at both the macro level (the larger population in space and time that the study is drawing from) and the micro level (exactly which unusual group from among that population volunteered to be part of the study). Even the relatively simple outcome measures vary across datasets. So averaging the results together is utter nonsense. What is it the average of? The answer is not even something vague and barely-meaningful like, “the average result when you try the many different possible methods across many different peoples and times” because it is not even that. It is the average of results from the particular collection of methods and people that someone happened to write down, which is unlikely to represent the full range of options. Moreover — if you want to put the icing on the cake of this absurdity — the average is weighted by however big the particular individual study happened to be. So if there was one study of Estonians who were given a cigalike in 2017, but it was only 20 subjects, then its result will barely affect the average, but if they had happened to enroll 1000 subjects in the exact same study with the exact same result, it would have a large effect on the average. Just think about that. Anyone who thinks all this is legitimate scientific methodology, well, I have some bad news for you about astrology also.
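The arithmetic behind that hypothetical Estonian study is easy to demonstrate; here is a toy version with invented numbers:

```python
# Toy pooled estimate, weighted by sample size; all numbers invented.
def pooled(studies):
    """studies: list of (n_subjects, quit_rate) pairs."""
    total = sum(n for n, _ in studies)
    return sum(n * rate for n, rate in studies) / total

others = [(300, 0.12), (250, 0.10), (400, 0.14)]
print(f"{pooled(others + [(20, 0.02)]):.3f}")    # ~0.121: 20 subjects, tiny pull
print(f"{pooled(others + [(1000, 0.02)]):.3f}")  # ~0.070: same result, huge pull
```

Identical study results, different enrollment, and the pooled “finding” nearly halves. That is the whole trick.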

The data on smoking and COVID makes the smoking cessation trials look like medical treatment trials, in terms of heterogeneity and representativeness. As noted in the previous point, one of the strengths of the data supporting the smoking-COVID conclusion is that it is extremely heterogeneous. Even the collections that are the “same” are very different. For example, even if you limit your collection to case series of COVID patients from Chinese hospitals, it is still a collection of whatever people happened to be in these particular hospitals on particular days (and they differ at the macro and micro levels even though they are more similar than “all the people in the world”), with very different outcome endpoints (each study was about some particular outcome of interest to one group of researchers), different exposure measures, and different data collection methods. It is absurd to average these together. Needless to say, averaging these together with whatever data happens to appear in French and American national statistics, and some case studies in Germany, and so on, is several steps more absurd.

Moreover, even if the methodology did not deliver an absurd weighted average of whatever happens to have been reported, why would you want to know the average? Imagine that we had data for every COVID hospitalization for each country in the world, rather than just a random nonsystematic subset of those, and could average them all together. Why would we want to? If the apparent protective effect were greater in China than in France (this seems to be the case), that is more useful information than the average of them. The same is true if the association varies by outcome measures or whatever. Taking the average discards useful information and produces nothing of value.

3. “There is a plausible mechanism” rhetoric is approximately worthless.

Being able to come up with some plausible story for why an association in the data represents causation is not an impressive feat. It can pretty much always be done. This is particularly true when the story is about biochemistry, where human knowledge is still so primitive and where few people have any real intuition for it (unlike with behavioral stories). It is informative when someone proposes the story and then specifically tests it (i.e., “under this story, we would expect to see X and not expect to see Y, and so we looked to see if X and Y….”). But an ad hoc story to explain an existing observation is entirely different.

If data came out that wearing cloth masks does a better job of reducing SARS-CoV-2 spread than paper masks, it would be accompanied by a collection of just-so stories about what mechanism is causing it. If the data said paper worked better than cloth, it too would be accompanied by mechanistic stories. Sitting there by themselves, either set of stories would be plausible and indeed compelling because it was the only thing being presented. People would be saying, “yeah, because of [mechanistic story], we would expect this difference”. Whichever way the difference went, the commentary would be all about why this should be expected to be the case.

I suppose that if no one can come up with any story for how a particular association is causal, that would be a strike against the claim of causation. Though sometimes even when the first reaction is “nah, no way this is real”, it turns out that it is and no one understood the story, so this is far from definitive. Perhaps if data showed, say, that smoking weed on Tuesdays is strongly associated with diminished productivity, but smoking on Wednesdays is not, the right assessment would be “we cannot figure out any story that would make this a real causal difference, so we conclude it is a meaningless artifact of our data.” But I’ll bet that half of you are already coming up with stories for why that contrast might really be causal, so you just proved my point.

So when you hear a story about what is causing a particular observed pattern in the data, keep in mind that it is always easy to make up such a story. Of course, if someone performs a proper focused test to see whether that story, rather than some alternative, is correct, that is good science. That is the essence of science. But don’t expect to see it in public health research, where they will just make up a story and declare it to be true without ever testing it.

3a. There is little reason to believe that the protective factor is nicotine.

This is the specific implication of the previous point. You may have seen the claims that nicotine seems to be what makes smoking protective because “…blah blah…ACE enzymes…blah blah…lots of other words that you think must be true because they sound all sciencey”. While these stories are plausible, the existence of the stories is not informative. Again, plausible stories are always possible.

Smoking is a complex exposure that has a lot of effects on people’s biology and behavior. SARS-CoV-2 transmission is complicated and we do not fully understand it, and COVID-19 severity is even more complicated and we barely understand it at all. There are countless possible causal pathways there, only some of which are about the nicotine. Just as people with little knowledge of tobacco use think of nicotine as being the harmful aspect of smoking, people who are immersed in vaping and NRT politics tend to think of nicotine as the beneficial aspect of smoking. They have an anti-scientific prejudice (like most people do about most things, but it is less forgivable in this context) and so make up a story to make the data fit their assumptions.

The scientific approach would be to withhold judgment until we have some data that resolves the question. The easiest and most obvious observation would be whether exclusive vapers or long-term NRT users have the same protective association that smokers do. Unfortunately, we will not stumble into that observation like we did with smoking because there are fewer of them and data collection about vaping/NRT status is of even lower quality than for smoking, and often not collected at all. Thus there needs to be a focused systematic study to answer this, and it does not seem to be happening.

It is worth adding that trials in which smokers are given nicotine patches when hospitalized for COVID-19 (which are being done, because there is always money for treatment research) are not helpful in answering this question. The biggest variable in that mix is whether or not patients are forced into the stress of nicotine withdrawal. (It would certainly be very useful to know if this is killing people, but it does not address the question of why smoking is protective.) Indeed, even giving non-smoking patients nicotine patches would not address the question very effectively because (a) effects of an ongoing consumption choice do not necessarily start the first week of consumption and (b) the protective effect of smoking occurs before someone becomes a patient, so any effect at this stage might be an entirely different phenomenon.

3b. Where along a causal pathway the protection occurs is also unclear.

This is another specific point relating to story-telling. Almost all of the available comparisons are based on clinically-significant (usually hospitalized) COVID cases. The deficit of smokers in that group could mean that smoking prevents colonization with SARS-CoV-2, or that it prevents the colonization from progressing into COVID-19 at all, or that it prevents cases of the disease from getting severe. The effect is frequently described in terms of preventing colonization, which might be true (and would be bad news for smokers, given that the protection is presumably not perfect, so smokers remain susceptible to eventual infection), but we do not know. The evidence cited in favor of that — comparisons of smoking rates in all test-diagnosed cases (not just hospitalized cases) show a deficit of smokers — does not really show it. If smoking only prevented significant disease after infection, we would still see this because (in most populations, so far) a large portion of tests are among people with disease symptoms.
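A toy simulation, with parameters I made up entirely, shows why the test-diagnosed comparisons cannot distinguish the two stories:

```python
import random
random.seed(1)

# Toy simulation; every parameter is invented. Assume smoking does NOT
# prevent infection at all -- it only stops infections from becoming
# symptomatic -- and assume testing is mostly symptom-driven.
N = 1_000_000
smoking_prev = 0.25
p_infected = 0.05                          # identical for everyone
p_symptomatic = {True: 0.20, False: 0.50}  # keyed by "is a smoker"
p_tested = {True: 0.80, False: 0.05}       # keyed by "is symptomatic"

positives = {True: 0, False: 0}            # keyed by "is a smoker"
for _ in range(N):
    smoker = random.random() < smoking_prev
    if random.random() < p_infected:
        symptomatic = random.random() < p_symptomatic[smoker]
        if random.random() < p_tested[symptomatic]:
            positives[smoker] += 1

frac = positives[True] / (positives[True] + positives[False])
print(f"smoking prevalence in the population:     {smoking_prev:.0%}")
print(f"smoking prevalence among diagnosed cases: {frac:.0%}")  # ~14%
```

Smoking does nothing to prevent infection in this simulation, yet smokers still look dramatically “protected” in the test-diagnosed data, simply because testing chases symptoms.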

It turns out epidemiology requires a bit of thinking about what the data means. Go figure.

4. Correlation is not causation, but it is the best possible evidence of causation that exists.

Causation can never be observed. Never. Never ever ever. All we can do is infer it from observed associations. If you have those, whatever form they take, you can infer causation. Or perhaps not — because perhaps there is a good affirmative reason to doubt that the association is causal. If so, it is useful to focus on testing that rather than either doing more of the same or, worse, saying “there is a reason for doubt and therefore I am just going to doubt”. This does not include running any old clinical trial whose only “virtue” (*cough*) is being a clinical trial. There are no simple recipes for inferring causation. Anyone who suggests otherwise is too simple to be doing science.


Sunday Science Lesson: How are deaths counted (for pandemics, smoking, etc.)?

by Carl V Phillips

There has always been a lot of confusion about what counts as a death from smoking — or from the current pandemic, a war, or most anything else. Events of 2020 have caused a lot more people to realize they are confused about what it means. Typically people just recite numbers they hear without pausing to ask what they could possibly mean. Deaths-from-smoking statistics are recited like a factoid one might hear about particle physics. In both cases a moment’s thought would reveal to most people that they really have no idea what it even means. But unlike with a lot of physics, it is possible for most anyone to understand what the death counts mean and how they are properly estimated. Continue reading

Can smoking protect you against COVID-19?

by Carl V Phillips

Many of you will have already seen or heard about a paper by Farsalinos et al., in which they review some case series data from China and observe that for hospitalized COVID-19 patients, the recorded smoking prevalence is far lower than would be expected given the population prevalence. The US CDC also released data a couple of days ago that shows the same pattern. If the data is representative and accurate (but note that there are compelling reasons to question whether either of those is true), this strongly suggests that smoking is hugely protective against COVID-19 infection and/or the resulting disease progressing to the point that hospitalization is required.

We are not talking at the level of “well I guess smokers get a bit of compensation this year for all the health costs of smoking.” This is at the level of “everyone should take up smoking for a few months until the pandemic abates.” The protective effect implied by the data is absolutely huge. Continue reading

New Glover-Phillips paper: “Potential effects of using non-combustible tobacco and nicotine products during pregnancy: a systematic review”

by Carl V Phillips

This new paper, by Marewa Glover and me, is just out in Harm Reduction Journal. In it, we review the available epidemiology evidence about the effects of nicotine-sans-smoke (NRT, snus, vape) on pregnancy outcomes. It was a bit of a challenge to get it published because we wrote the paper we needed to, rather than a “typical review”. As you might know, the journal publication process is rather …well, let’s just say conservative.

(I should note that we finalized the review before this new contribution to the literature by Igor Burstyn et. co., “Smoking and use of electronic cigarettes (vaping) in relation to preterm birth and small-for-gestational-age in a 2016 U.S. national sample” (note that this link bypasses the paywall, but does not seem to work in all browser configurations). Igor’s paper is higher quality than anything we reviewed.)

A typical review of epidemiology looks at the results that are reported in journal articles and then just naively believes them, suggesting that What We Know consists of a vague summary of whatever results the previous authors chose to publish. Or even worse — so much much worse — suggesting that a calculated average of those results is the best estimate. That is never a legitimate assessment of existing knowledge, and less so in our case. Continue reading

“Dependence” and the danger of adopting the language of your oppressors

by Carl V Phillips

Vaping, smoking, and other tobacco product use are routinely described as “addictive”. As I have pointed out repeatedly, this is a very misleading characterization. (You might recall my major essay – years in the making – on this topic from earlier this year. If you missed it and are reading this, you will definitely want to go read it.) The two-sentence summary of the headline point is: All ‘official’ definitions of “addiction” hinge on the behavior being highly disruptive to someone’s functioning – work, social, etc. But tobacco product use has no such effects, at least not for more than a minuscule fraction of consumers.

So the fallback position, in the event that someone recognizes the problem with that word, is to say that tobacco product use produces dependence. But this is barely more accurate and is equally misleading. For those who use these products or advocate for their acceptance, to use either of these words is to make the rhetorically and psychologically dangerous mistake of adopting the language of one’s oppressors. Continue reading

The folly of federalism for vaping (etc.) policy

by Carl V Phillips

This post goes a little more into non-ethics political science than I normally do.

Federalism is frequently a very good way to make government decisions. Federalism, of course, refers to having some government decisions made at a more local level, rather than being made at the highest aggregation of government. The devolution of decisions to a local level allows for consideration of local differences in situations or preferences. It makes no sense to create national rules about parking or building zoning. Offering genuine flexibility at a lower aggregation of decision-making avoids the tremendous failure of a Soviet-style system (which, contrary to a great deal of commentary, is all about the hopelessness of central planning, and has little or nothing to do with “socialism”). If a smaller unit of decision making — be it an individual consumer or business, or a more local government — is capable of making a decision, they should be allowed to do it. Continue reading

Fixed it for you – a Science Lesson based on an anti-vaping junk newspaper article

by Carl V Phillips

I was asked to write something about the “research result” that was the germ of this Daily Mail article. I realized that I could turn the whole article into a science lesson about not only the particular result, but about the general flaws in this field. So here it is, in the form of a “fixed it for you” rewrite of all of it.

To save you a bunch of jumping back and forth, I will quote each bit of the original before rewriting it. Is that pushing the boundaries of “fair use” for criticism purposes? Perhaps. But it is the f—ing Daily Mail, so I am not going to think too hard about it.

Vaping could put you at the same risk of getting heart disease as smoking cigarettes, research suggests.

Public Health England claims e-cigarettes are ’95 per cent safer than traditional tobacco’ and encourages smokers to make the switch.

But researchers have found the devices may trigger changes in cholesterol linked to killer heart disease, similar to cigarettes.

Vaping also stifled the heart’s ability to pump blood around the body just as much, if not more, than traditional forms of tobacco.

A new paper claims that for one particular effect of smoking — one of the many ways in which smoking causes heart disease, and not even close to one of the biggest ones — vaping may cause similar levels of effect.

Public Health England has tricked people into believing the best-case-scenario for vaping is that it causes 5% of the harm from smoking. This is an absurd claim, a made-up number that is based on nothing, and which is far higher than any reasonable estimate of the risk. But because even defenders of vaping are tricked into endorsing it, it makes a great starting point for those who want to print alarmist claims that “vaping is really much worse than that!”

Research has shown smoking cigarettes increases heart rate, tightens major arteries and can cause an irregular heart rhythm – all of which make your heart work harder.

The killer habit also raises blood pressure, which increases the risk of a stroke and a heart attack.

Smoking causes acute changes in the circulatory system, which can trigger cardiovascular events. Smoking also does enormous tissue damage to the heart and blood vessels and replaces oxygen in the blood with carbon monoxide, which are the reasons it causes a lot of cardiovascular disease.

Stimulant chemicals (e.g., caffeine, nicotine, antihistamines) cause acute changes similar to the minor effects of smoking, as does physical activity (e.g., going to the gym, having sex). Even many everyday activities have this effect (e.g., taking a hot shower, walking up the stairs). All of these exposures increase the very-short-run rate of having a stroke or heart attack. But since this is a very minor pathway from smoking to these outcomes, the risk is trivial compared to the total risk from smoking.

In addition, these acute outcomes are generally believed to be “harvesting effects” — that is, they trigger what were already imminent events. Someone who has a heart attack from walking up a flight of stairs probably would have had that heart attack later the same week if he had avoided stairs. Someone who is struck down by a trip to the gym or having sex probably would have suffered that outcome within a month or a year.

Scientists are unsure why e-cigarettes cause similar changes in heart health, even though they contain fewer harmful chemicals than standard cigarettes.

Scientists — at least those who understand these simple facts — would expect a nicotine dose from vaping or other smoke-free products to have these same very-short-term effects. They would be detectable using biomarkers (clinical tests) and would cause the occasional harvesting event, though at a rate so low it would be almost impossible to detect. These are a result of the mild stimulant itself, not the toxicant damage from breathing smoke.

E-cigarettes allow users to inhale nicotine in vapour form, rather than breathing in smoke from cigarettes which burn tobacco and produce tar.

But scientists are now advising users wean off e-cigarettes because of the ‘lack of information on long-term safety’ and a ‘growing body of data on their negative effects’.

Vaping is much better for you than smoking. It is not even clear it is bad for you at all, though there is reason to worry a little about anything that involves the lungs. Nevertheless, anti-tobacco activists — some of whom pose as scientists — have been as aggressive about discouraging vaping as they are about discouraging smoking. Indeed, they have become even more aggressive lately, after they belatedly decided that vaping poses a greater threat to their real mission, anti-tobacco extremism.

Our best estimates of the (minimal) health risks have not changed importantly since about 2007, so this trend is purely political and does not reflect any change in what we know. The trend has, however, produced a huge uptick in attempts to concoct “scientific” rationalizations for the political goals.

Researchers from Boston University analysed 476 participants aged between 21 and 45 with no previous heart issues. Of them, 94 were non-smokers, 45 e-cigarette users, 52 people who used both e-cigarettes and traditional tobacco and 285 cigarette smokers.

The team found that bad cholesterol, known as LDL, was higher in sole e-cigarette users compared to non-smokers.

When you have more LDL than your body needs, it can cause plaque to build up in your arteries. This thick, hard plaque can clog your arteries like a blocked pipe.

Reduced blood flow can lead to a stroke or heart attack. If a clot completely blocks an artery feeding your heart, you can have a heart attack.

For example, a recent study out of Boston University found that vapers had a higher level of low-density lipoprotein cholesterol (LDL, the bad kind of serum cholesterol). The authors could have just noted that this was a curious finding that was probably caused by random error or study bias, given that there is no good reason to expect it is causal. This is especially true given that their study was underpowered and used sketchy methodology. The correct next step would have been for the authors to check what had been observed previously, in other studies, and perhaps to suggest it was worth investigating this possible curious relationship using better methodology.

But the incentive system for all public health researchers — and especially those researching exposures that are currently considered evil and are attracting a lot of funding — is to claim that they discovered something important. There is no penalty in public health science (such as it is) for making wild unsupported claims or for later being proven wrong. But there are ample rewards for saying what funders want to hear and for getting featured in clickbait news articles, no matter how unscientific their message is.

Lead author Sana Majid said: ‘Although primary care providers and patients may think that the use of e-cigarettes by cigarette smokers makes heart health sense, our study shows e-cigarette use is also related to differences in cholesterol levels.

It is especially easy to take advantage of the widespread confusion (which has been actively cultivated by both public health professionals and clickbait newspaper writers) of not recognizing the difference between a measurable change in a biomarker and a substantial risk, and even more so the confusion about overall risk versus a single causal pathway. This observation requires some unpacking:

A small change in one particular biomarker of risk, say LDL, sometimes represents an increase in someone’s risk. (This sets aside whether the associated exposure really caused it, or if it was just study error.) However, that “sometimes” reflects the fact that a biomarker can be increased via a pathway that does not increase risk. Having the higher level of that biomarker is a predictor of risk, on average, but raising the level in a particular way does not increase risk. For example, having a high body mass index (BMI) is a predictor of various bad outcomes. But the increased risk reflects the pathway to high BMI via a lack of exercise, and to some extent massive overeating. The body mass itself is a small part of the risk. If someone has a high BMI but exercises a lot, the extra risk is minimal. If someone acquires a high BMI by exercising a lot (bulking up at the gym), the increasing BMI is tracking a reduction in health risk. Thus, a particular exposure causing a particular biomarker change (again, assuming it even is causal) does not necessarily mean it has the same average effect as someone having that different biomarker level.

In addition, even if the increase in the LDL biomarker is causal and it represents the average additional risk from that higher level of LDL, it is still a trivial risk. Typically this would be something like a 0.5% increase in heart attack risk. That hypothetical effect is not nothing, of course. But it would merely mean in this case — pretending that 0.5% is a valid estimate — that vaping causes about 0.25% of the risk from smoking (so still small compared to Public Health England’s clever ruse).
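For what it is worth, here is the arithmetic as I reconstruct it. The smoking figure is my assumption, not anything from the paper: it takes smoking to roughly triple heart attack risk (a 200% increase) for a figure like 0.25% to fall out.

```python
# Reconstructed back-of-envelope arithmetic; the smoking number is an
# assumption (smoking roughly tripling heart attack risk, i.e. a 200%
# increase), chosen only to show how a figure like 0.25% falls out.
vaping_increase = 0.005   # the hypothetical 0.5% increase from the LDL effect
smoking_increase = 2.00   # assumed 200% increase in risk from smoking

print(f"{vaping_increase / smoking_increase:.2%} of the risk from smoking")
# -> 0.25%
```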

Readers exposed to a biomarker statistic in isolation are very rarely told just how small an absolute risk it might represent. This is another trick related to causal pathways that is used to confuse readers. As already noted, smoking causes cardiovascular disease via damaging tissue and pumping in carbon monoxide, as well as via changing blood chemistry, blood pressure, and cardiac rhythm. The latter entries on that list are minor contributors to disease, as far as we can tell. So a discovery (pretend it is really a discovery) that “vaping is just as bad as smoking in terms of this one isolated measure of heart disease risk” is of little consequence in terms of overall risk. It is just one very minor pathway out of the many pathways via which smoking causes cardiovascular disease. But the average reader only understands the “just as bad as smoking” and “heart disease risk” bits of that. The activist “researchers” and clickbait publishers count on that and cultivate it.

‘The best option is to use FDA-approved methods to aid in smoking cessation, along with behavioural counselling.’

They then, with no apparent sense of irony, feel the need to recommend pharmaceutical nicotine products which have approximately the same effects as the nicotine from vaping.

However, the team’s research did not look at whether vape users had previously smoked cigarettes.

The high cholesterol levels therefore may have been caused by damage done by previous traditional tobacco use.

Another favorite trick of public health “scientists” is the Study Limitations paragraph. Sometimes this throw-away paragraph, buried in the Discussion section, notes one of the legitimate weaknesses of the study, such as the failure to control for the confounding effect of previous smoking in the Boston University study. But mostly that paragraph is the equivalent of a stage magician’s misdirection tactics: Several aspects of the study that are less-than-ideal but not really serious problems are listed, while real fatal flaws are never mentioned.

What is worse, when a serious problem is acknowledged (e.g., apparent uncontrolled confounding that seems like it has a huge effect on the reported estimate), that is all that is done: it is acknowledged. But legitimate scientific analysis does not offer some ritualized confessional process, wherein someone can sin and then just have its implications washed away by admitting to it. If the authors know such problems exist, they have no business reporting the estimate they derived as if it were a best estimate of causation (or even of association in many cases). They should figure out how to incorporate the resulting uncertainty into their results reporting, or not report those results at all. Yet in the results section, abstract, title, and press releases the authors declare they have found a particular precise relationship. Then in one of the last few paragraphs of the paper, they openly admit that their methods cannot actually support that conclusion even roughly, let alone at the precision they claimed. That is not ok.

A separate study, by the Cedars-Sinai Medical Center in Los Angeles, found vaping was worse for heart blood flow than cigarettes.

Researchers analysed 19 young adult smokers – aged between 24 and 32 – immediately before and after vaping or smoking a cigarette.

They examined the heart’s function using an ultrasound while participants were at rest and after performing a handgrip exercise to simulate physiologic stress.

In smokers who use traditional cigarettes, blood flow increased modestly after inhalation and then decreased with subsequent stress.

However, in smokers who used e-cigarettes, blood flow decreased after both inhalation at rest and after handgrip stress.

Given the random errors and biases in highly-specific studies, it is easy to produce alarmist reports based on biomarkers when that is the goal. The studies are cheap and easy to crank out. For example, “researchers” out of Cedars-Sinai Medical Center used ultrasound to look at the effects of vaping and smoking on 19 young smokers. They got different blood flow results for smoking and vaping, and declared that this suggests something is worse about vaping as compared to smoking, in terms of health risk, even though there is no reason to conclude this.

Lead author Florian Rader, medical director of the Human Physiology Laboratory at the Cedars-Sinai Medical Center, said: ‘These results indicate that e-cig use is associated with persistent coronary vascular dysfunction at rest, even in the absence of physiologic stress.’

Co-author Susan Cheng, director of public health research, also at Cedars-Sinai Medical Center, added: ‘We were surprised by our observation of the heart’s blood flow being reduced at rest, even in the absence of stress, following inhalation from the e-cigarette.

The authors even admitted that this result was not in keeping with any hypothesis they had. But instead of admitting that, with such a small sample size, it was probably meaningless random error, or even offering the obvious alternative hypotheses (e.g., smokers who are not used to vaping might have a one-time response to the novel exposure that does not persist with continued use), they simply declared that this represents a health risk.

Unlike with the LDL example, in this case they do not even have a reason to believe that this biomarker difference is, on average, associated with greater disease risk. Does their reported result of a transitory decrease in blood flow after puffing a vape (in contrast with their reported increase in blood flow after puffing a cigarette) represent an increase in risk? They have no legitimate way to even guess. They could not even have offered context like “if real, this represents 0.25% of the cardiovascular risk caused by smoking”, even if they wanted to (which, of course, they did not). What they have is merely the functional equivalent of when you smell some foul odor or see some problem with a food and quip “that can’t be good for you.”

‘Providers counseling patients on the use of nicotine products will want to consider the possibility that e-cigs may confer as much and potentially even more harm to users and especially patients at risk for vascular disease.’

Both studies are being presented at the American Heart Association (AHA) Scientific Session conference in Philadelphia this week.

In any case, even if every last one of these results (which get churned out so that people can use them as an excuse to go to conferences) were real, which is undoubtedly not the case, the best estimate of the total resulting risk would probably still fall short of Public Health England’s fictitious 5%. Any suggestion that all of these results taken together, let alone just a few of them, means that vaping is as harmful as smoking is just utterly absurd. Anyone who claims that is probably both incompetent to judge and dishonest.

Rose Marie Robertson, the AHA’s deputy chief science and medical officer, said: ‘There is no long-term safety data on e-cigarettes.

We have enough data from analytic chemistry plus toxicology, legitimate clinical studies that are based on reasonable hypotheses rather than being cheap fishing expeditions, and a lack of observed unexpected outcomes (e.g., there are no “popcorn lung” cases) to be confident that the risk from vaping is trivial. It would be good to have some longer-term observational epidemiology to confirm this. It would be better still to have the testimony of an omniscient god who just told us the answer. But lacking either of those, we should do what real scientists do: Make the best assessment we can based on the evidence we have. Real scientists — and normal people who naturally think like scientists more than most “public health scientists” do — never suggest that the lack of the Word of God, nor any other data they wish they had, means we should pretend utter ignorance.

‘However, there are decades of data for the safety of other nicotine replacement therapies.’

The AHA recommends people quit smoking using patches, inhalers and gum that ‘are FDA-approved and proven safe and effective’.

Much of what we know about the safety of vaping, and the lack of real health outcomes from the measurable acute biomarker effects of nicotine, comes from what we know about other nicotine sources. Most importantly, any risks of real cardiovascular outcomes (or cancer, etc.) from Western-style smokeless tobacco use are clearly below the limits of what we can detect via the extensive epidemiology. We also have a tiny bit of evidence about the use of pharmaceutical nicotine, mostly the same exposure as vaping, which further supports this. (We have limited data about long-term use of pharmaceutical products because the industry tries to maintain the fiction that their products are used only short-term, even though they are mostly used long-term like any other tobacco product. Thus, funders discourage research on their long-term effects.)

It comes after 40 Americans have been killed by mysterious lung diseases linked to vaping across 24 US states.

In the interests of ending this article with a non-sequitur we would like to mention that people buy a lot of drugs on the street with unknown cocktails of active drugs and potentially harmful inactive ingredients, and are sometimes poisoned as a result. Since vaping is also drug use, it stands to reason that vaping must be bad. That’s just basic clickbait logic.