Even Norwegians do not understand how low-risk snus is

by Carl V Phillips

In honor of my launching my Patreon account a few hours ago…

[Inevitable plug: If you like my work and consider it valuable, please consider becoming a patron. There will also be some premium content for donors. Check it out here.]

…I thought I would write about one of the rare good and useful bits of new research in this space. It is “Relative Risk Perceptions between Snus and Cigarettes in a Snus-Prevalent Society—An Observational Study over a 16 Year Period” by Karl Erik Lund and Tord Finne Vedoy, available open-access (kudos!) here. In it they discover that despite the Norwegian population becoming one of the small number of THR success stories, perceptions about the risk from snus (the leading low-risk substitute for smoking there) are still way off.

(This is a workaday research review. If you want something deep and epic, please check out the previous post. If you want something incendiary, please stay tuned [or scroll down to the Update].)


Smoking is not addictive

by Carl V Phillips

Now that I have your attention, this long essay is my response to the frequent requests to summarize my analyses of the concept of addiction, particularly how it relates to tobacco product use. I should note that the headline is based on the most commonly-accepted definition of “addictive”. I will work my way through that to other senses of the word under which smoking might be considered addictive.

Tobacco control ratf**king

by Carl V Phillips

Sorry for the silence, though it may get worse. As some of you know, I lost my funding to keep working on THR issues, and I expect I have to fully move on to other work. But I thought I would pop in with some insight from the old guard. After that, I have an epic post that is almost finished (years in the making). These will be good notes to go out on.

For those not familiar, “ratfucking” is a term from the Nixon era that refers to political dirty tricks. The word has had a major resurgence in the age of Roger Stone, Stephen Miller, et al. [Fun fact: I was not sure of the spelling of Miller’s name, so I typed a search for “Trump advisor smug weasel” and his name appeared as the second entry after an article about presidential advisors in general.] The nuance of the word typically refers to sowing false information or allegations to harass and damage the opposition. The acts are usually barely legal (except perhaps insofar as they constitute a criminal conspiracy) or at least are impossible to prosecute, but they are a clear violation of norms of social behavior and other rules of conduct.

In the political arena, ratfucking includes such things as push-polls, doctored photographs, engaging in public bad acts while pretending to be a member of the opposition, and other methods of trying to exacerbate problems that are blamed on the opposition. As practiced by tobacco controllers, ratfucking includes, well, such things as push-polls, doctored photographs, engaging in public bad acts while pretending to be a member of the opposition, and other methods of trying to exacerbate problems that are blamed on the opposition.

Peer review of: Charlotta Pisinger et al. (U Copenhagen public health), A conflict of interest is strongly associated with tobacco industry-favourable results, indicating no harm of e-cigarettes, Preventive Medicine 2018.

by Carl V Phillips

For an overview of this collection and an explanation of the format of this post, please see this brief footnote post.

The paper reviewed here is not available on Sci-Hub at the moment, or anywhere else (I will add a link if someone finds one). The abstract and portal to the paywalled version are here. And, yes, that series of words really is the garbled title the paper was published with.


This is one of the worst tobacco control papers of the year, and that is a high bar. It suffers from multiple layers of fatal flaws. On the upside, most anti-THR junk science is pretty unremarkable — the usual problems of using flawed methodology, erroneously assuming associations represent causation in a particular direction, statistical games, model fishing, etc. Several of those problems are present here too, along with others, thus creating value as a broad tour of the blatant biases and slipshod work that permeate tobacco control, as well as the low quality of what passes for qualitative public health research more generally.

The authors reviewed 94 journal articles (actually just a few sentences of the abstracts) about vapor product chemistry or effects of in vitro exposures (an undifferentiated muddle of incommensurate papers, which is itself a fatal flaw, though totally overshadowed by far larger flaws). They claim to assess the conflict of interest (COI) of the authors of each paper, though they do not. Then they do some statistics and imply that having industry-related COI causes authors to… well, do something bad, I guess, though this is neither stated nor supported by the analysis.

Just to get the most absurd flaw in the paper out of the way: In the title, the conclusion statement of the abstract, and elsewhere in the paper, the authors assert that many of the papers they reviewed claimed that vaping is harmless. I am not going to re-review their database to see, but I would be shocked if there were even two papers that said anything that could even be interpreted that way, and I would guess it is actually zero. Even a poor scientist would know enough to not claim that the one bit of chemistry or toxicology they assessed would support a claim of “harmless”. I would be shocked if there was a single paper by tobacco-industry-employed researchers in the entire modern literature that made that mistake, and fairly shocked if any researcher funded by a serious company was so clueless as to do so.

The authors’ own erroneous denial of COI…

Normally reviews in this collection mention the (seemingly inevitable) dishonest denial of COI by tobacco controllers as a brief aside at the end. But for a paper that is supposedly about COI, it is worth headlining. The authors claimed:

Conflicts of interest: CP and AMB state that they have no conflict of interest. NG has accepted several invitations (travel expenses, accommodation and conference fee) from the pharmaceutical industry to take part in international medical conferences in the last five years.

The paper itself makes clear that the authors have a serious anti-industry bias, which is a huge COI for any paper about vaping. Indeed, the content of the paper suggests this reaches the level of being a fatal problem for this particular analysis: it rendered them incapable of doing any legitimate work. But even if it is not that bad, it obviously is not “no conflict of interest.” The first author is an anti-vaping activist; she has written nothing of importance, but we can see from her anti-vaping testimony to the EU in 2013 and this recent commentary that she has long practiced anti-vaping activism, and is willing to make absurd claims in pursuit of that conflicting interest. [Update: Someone has noted in the comments that she has also previously acknowledged doing work for pharma.] The second author, Nina Godtfredsen, has written no apparent anti-vaping commentaries, but has long been involved with institutional tobacco control. The third author could probably get away with the “no COI” claim based on her lack of publications. But for all of them there is still obvious bias evident in the paper, as well as their financial COI described below.

…and the related failure to do their own research correctly.

The authors’ obliviousness to their own COIs, as demonstrated in the above dishonest “disclosure”, creates the next fatal flaw in the paper itself, the notion that a large portion of authors had no conflict of interest. The authors assessed only whether the authors of the papers in their dataset had any funding relationship with industry, or with one of (only) three anti-vaping funders (NIH, FDA, and WHO). But, as anyone who actually understands the concept of COI knows, funding is only one of many COIs, and is seldom the most important. Everyone writing in this space has non-financial COIs. Moreover, the types of funding relationships described in papers’ front- and end-matter are only one of many financial COIs, and are often not the most important. For example, a one-off grant to write a paper is a far less important financial COI than working for a university department that, for other projects, gets substantial ongoing funding from an interested agency.

Also, in case you have an inclination to give the authors credit for trying to assess the effects of NIH/FDA/WHO funding (why just those three??), note that they wrote, “Ten out of 63 studies without a conflict of interest were funded by NIH, FDA or WHO” (emphasis added). That is just funny.

This error only became fatal upon publication. If anyone who understood the concept of COI had been involved in editing or reviewing this paper for the journal, or merely been asked to comment on a manuscript, they could have fixed the problem: They would have just told the authors to replace the absurd “no conflict of interest” category and language with “no industry funding” or some other description of what they actually claimed(!) to have coded. But apparently no one with any expertise in the titular subject matter ever read the paper before it appeared in a journal.

That would have saved the authors from broadcasting how little they understand COI quite so loudly. However, it would not have solved the next layer of fatal error, in which they fail to understand even financial COI. Their methodology for assigning COI scores is opaque, and basically could be described as “what the first author thought the coding should be.” There are some words, but they are a muddle. There is no clue exactly what the key word “sponsored” means (let alone “partially sponsored”), and no indication which entities counted for which category (would FSFW count as “the tobacco industry” in their minds? probably, but they do not say). There is a vague explanation for what information was used, and it appears to just be the front- and end-matter of the paper (funding acknowledgments, institutional affiliations), or that in other papers by the same authors for publications that omit that information. Presumably the authors could provide some clarification if asked, so the sloppy reporting of the methods is mostly a testament to how unserious both the authors and the journal process were.

Financial COIs are not homogeneous.

The bigger problem here is that the authors apparently do not understand how different types of funding create different COIs. A one-off grant, actually working for CDC or PMI, and an ongoing center grant are very different. An investigator-initiated project, whatever funding it manages to get, is very different from a funder issuing a highly-specific RFP or contract. An unrestricted pool of research money is different from a funded predefined research program. Figuring out exactly how to score these differently would be tricky, but the authors seem oblivious to the fact that you need to at least try.

Also, funders vary. It is difficult to imagine FDA renewing a center grant if the researchers produced a series of papers that undermined FDA’s political agenda, and tobacco control funders have a history of cutting off funding when they do not like someone’s results. It is equally difficult to imagine a major tobacco company cutting off someone’s funds for publishing an inconvenient result. Thus, external grants from some funders create huge conflicting interests to support an agenda, while other funders — major tobacco companies in particular, who would not dare do so even if they were inclined — are very unlikely to impose pressure for particular results.

Chances are that these authors (and others who think this paper has any value) are unaware of this fact. If they are aware, their political views (i.e., their COIs!) presumably would result in them ignoring it. But even conceding that, it is still a fatal error to just lump all financial relationships into the categories “fully sponsored” research, “partially sponsored”, and other funding relationships. Indeed, the latter category in this ordered ranking is potentially more influential for someone who is inclined to cook her results to please a funder. Which is more enticing, getting your department a few thousand dollars to hire a student RA, or getting a personal honorarium and expenses to jet off to a conference to present the results?

Then there are the various financial COIs that do not appear in the end-matter. As already noted, an operation (e.g., university department) may be very beholden to funding from a particular political faction in the tobacco wars (read: from institutional tobacco control), regardless of the funding for the current paper. It seems safe to assume that these authors know that if they had actually written the analysis they pretended to, genuinely assessing the COIs of paper authors (assessing commentaries they had published; looking into their departments’ funding; etc.) and reporting them (e.g., noting that some of the authors are anti-vaping activists), then they would have been off the tobacco control gravy train for the rest of their careers. Even more so if they had assessed the quality of the research rather than just glancing at the abstracts. Grant funding for a paper does not create a financial COI — the money is already pocketed. The grant funding you want for your next paper, or for the rest of your career, does. (Past funding may create a COI in terms of disposition or attitude, but these authors were ignoring all such COIs.)

One subtle problem that is baked into the methodology, hardly worth mentioning given the major problems, is that the authors chose to code papers the same if all the authors had some affiliation or just one of them. Good independent research projects tend to assemble ad hoc groups of authors, with a reasonable chance that at least one is expert enough to have consulted for industry, so they would get coded as if they were industry projects.

The authors make no attempt to assess the quality of the papers they reviewed.

But let’s move on and counterfactually imagine having a useful measure of the COI that potentially affects each paper in some collection. What should someone do then? The most useful task would be to look at the methodologies to see if the study designs (what questions were being asked, what apparatus was used and how, quantities, etc.) seemed to be biased in ways that would advance the apparent conflicting interests. In particular, it would have been interesting to know which papers have been lambasted in public comments for their methodological failures (e.g., the ones in this collection that overheated the coils, thereby producing a cocktail of nasty chemicals that no one would ever vape). But that is not what the authors did. Indeed, it is apparently way over their heads. They presumably lack the understanding of the relevant science to assess the methods themselves (suggesting that perhaps they had no business undertaking this project without a coauthor who could), and it seems unlikely they are even capable of assessing which third-party criticisms of the methods are valid.

So instead they claim(!) to have looked at the results. That would not be useless, and writing “the authors really should have written this other paper I would rather read, even though it would have been beyond their ken” is never great in a review. But it should be recognized that this is a clearly inferior approach: Results differ from true values of what is purportedly being measured due to a combination of identifiable study design biases, hidden biases that even the study authors might not be able to recognize, and random error.

They did not even really look at the results.

Except they really did not look at the results. They really just looked at the conclusion sentence in the abstract. First, only the abstracts were reviewed. The authors rationalized this lazy approach by claiming it was done not because they only had two days to spend on this, but to represent the “real-life scenario” of most readers only looking at the abstract. That is actually useful to analyze and report, though it is inexcusable that they made no attempt to also assess whether what was reported in the abstract accurately represented the results of the study. Abstracts that misrepresent the results are, of course, a common manifestation of anti-tobacco COIs, and thus a particularly good way to assess whether COI influenced the reporting. Thus, the representation by the authors that they assessed the results of the studies is simply false.

The second and third authors — who appear to be unqualified to answer the question in most cases — coded each abstract for “Do the results indicate potential harm to health?” This is a scientifically illiterate question when asked without quantification; any analytic chemistry result and any toxicology result other than “no effect whatsoever was detected” (which presumably never happened) will contain information about some potential harm to health. Anyone who would ask this question (let alone sometimes answer “no”) is clearly unqualified to do this analysis. Presumably they were really just answering this question based on the stated conclusions, not the results. Thus, their two coding questions really become one question with an ad hoc element of quantification (i.e., the first is just a less sensitive version of the second).

The second question they coded, which is clearly what they were really interested in (and, sadly, genuinely closer to the typical know-nothing “real-life” method of reading practiced by supporters of tobacco control) was “What are the conclusions? (1. Concern that e-cigarettes might harm users’ health or public health; 2. No concern that e-cigarettes might harm users’ health or public health/recommend them as harm reduction strategies or 3. Unclear).” But as anyone who reads the tobacco control literature knows, such statements are just throw-away editorializing that seldom have anything to do with the actual research results. The only legitimate statement of concern or lack thereof would be “harmful levels of X were [not] detected in this study” or “the cellular effects detected in this study are [not] generally believed to represent real health risks.” It is slightly interesting to parse COIs against the usual editorializing to look for an association (spoiler: tobacco controllers throw in unsupported political editorializing in everything they write; industry researchers stick to the facts). But that, which is all they actually did, was not what these authors pretended to be doing.

The authors repeatedly make a big deal about how these assessments of the abstracts were blinded regarding authorship. This is comical. You would have to be dumb as a rock to not recognize the difference between a just-the-facts abstract written by a careful industry research team and the political screechings that are written by tobacco controllers. Yes, there is some middle ground, but not much.

Moreover, the stated coding for the second question is also scientifically illiterate. Anyone who does a single study of the types analyzed and draws any conclusion that would actively indicate “no concern” about exposure to vaping in general should be condemned for that, and perhaps that should be blamed on COI. Presumably approximately none of the abstracts actually said that. The stated “concern” that is coded as 1 (especially the bit about “public health”, which would require social science analysis, not just chemistry) is clearly just a measure of anti-vaping editorializing. It is possible to make a chemistry or toxicology discovery that raises alarm about some exposure, of course, but it happens that in the case of vaping there have been no such discoveries. Thus any stated “concern” is purely political.

In other words, this coding basically divides the world into authors who were actively editorializing (on the anti side) and those who just reported the facts. Whatever funding history makes someone more likely to fall into the latter category (spoiler: that would be avoiding institutional tobacco control funding) should be commended.

If the other aspects of the methods were not so unserious, it would be important that there is a big difference between where someone looks and what they do, a distinction the authors seem oblivious to. That is, a researcher that intended to concoct a politically favorable result might design the labs they are running to do that, or game how they report the results. Or they might just choose to assess something where they already know they will like the results. For example, an anti-vaping activist might choose to overheat the coil to produce a nasty cocktail, or he might just decide to observe whether there are detectable molecules of diacetyl or nitrosamines — already knowing there are — with the intention of spinning those trivial quantities as harmful. Alternatively, authors might do a study of COIs in which they design their coding to ensure that anti-vaping authors are said to have no COI, or they might just choose to observe whether high-quality industry research tends to produce reassuring results — already knowing it does — with the intention of spinning that as biased. (Or they might do both.) The omission of any attempt to assess how much of each of these is happening is pretty minor, given the other flaws, and is probably another “beyond their ken” point. But someone who was trying to do a serious version of the ostensible research would have done it.

Results (one is actually interesting!), unsupported conclusions, and the bizarrely absent conclusion.

The authors’ interpretation of the results from this train wreck methodology is hardly worth mentioning, let alone their silly use of statistical tests for the resulting trivial crosstabs (“hey, look at us, we is doing real scientifics!”). They dichotomized each of their codings, presumably in the way that produced the most dramatic results, but at this point who even cares about subtle clever ways of biasing the results.

Naturally, these “no conflict of interest” authors (LOL) concluded — after producing what is basically the observation that only non-industry studies tend to include absurd conclusions — with further absurdity: “a strong association between an industry–related conflict of interest and tobacco/e-cigarette industry–favourable results, indicating that e-cigarettes are harmless.” As already noted, presumably no one ever claimed the latter. Moreover, they never explain what they think an industry-favorable result even is, let alone make a case for why it is that. On top of that — and here is the craziest thing about this hot mess — they never actually claim that the COIs that (they assert without analysis) result from industry funding caused anything. Not once, here or anywhere else. It is not just that they do not propose a mechanism via which industry-associated researchers tried to get “industry-favourable [sic]” results (e.g., by only looking at the cleanest vapes or using too-small quantities). They do not ever even assert that those researchers tried to do so.

They literally wrote not a word about why the content of industry-associated abstracts might systematically differ from those of tobacco controllers. (The present review explains why; they did not even attempt to offer an alternative story.) They are so far down their rabbit hole of biased madness that they presumably think it goes without saying that the only possible explanation for the difference is that everything tobacco controllers write is valid and…? And what? Even if we pretend they made that counterfactual claim about tobacco control papers, they still fail to even assert that being associated with industry results in inaccurate study results. They are so blinded by their COIs that they do not even realize they forgot to say it. Of course, anyone who would take this paper seriously presumably lives in the same rabbit hole, so I suppose that does not matter much.

Their never even asserting that industry researchers do anything wrong, however, does not stop them from continuing with, “Some journals have already decided they will not publish tobacco industry–funded research. The present authors recommend all journals to follow in their footsteps.” Yes, that’s right. They made zero attempt to assess whether those studies used good methodology, or even whether they accurately represented their results. But they then called for censoring them all because they did not inappropriately editorialize in these authors’ preferred direction. What more do you need to know about the COI of tobacco controllers?

Serious readers, unless they are trying to write a review, will probably not even bother to look at the reported results. But that would cause them to miss the one interesting result — interesting when assessed based on what was actually done and not what the authors pretend was done: “Analyses showed that there was no difference [sic: 94 and 100 are different] in findings of harm between studies funded by [NIH/FDA/WHO] (10/10, 100%) and studies without [sic: sarcastic] conflict of interest funded by other sources [sic: actual editing error] (48/51, 94.1%; p = 0.831).” The authors presumably do not realize that what they found is that authors they pretend have “no COI” were almost as likely to engage in anti-vaping political spin as those with the strongest financial incentives to do so. In other words, within the world of authors who avoid any industry affiliation (or are too unskilled to be invited to have one), the COIs that are driving their behavior are mostly non-financial. We already knew that, of course, but this is an interesting statistic in support of that.
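As an aside, the paper does not say which test produced that p = 0.831 (yet another bit of sloppy reporting). As a rough check, here is a sketch that runs Fisher’s exact test — a standard choice for a sparse 2×2 table like this, though not necessarily the test the authors used — on the reported counts (10/10 vs. 48/51 coded as finding “harm”). Unsurprisingly, it too finds nothing resembling a difference:

```python
# Rough re-check of the paper's reported non-difference (the paper does not
# state which test gave p = 0.831). Fisher's exact test, computed from the
# hypergeometric distribution, on the reported 2x2 table:
#                        "harm"   no "harm"
#   NIH/FDA/WHO-funded     10         0
#   other non-industry     48         3
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    row1, col1, n = a + b, a + c, a + b + c + d

    def hyper(x):
        # P(x col-1 outcomes land in row 1), with all margins held fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = hyper(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    # Sum the probabilities of every table at least as extreme as the observed one
    return sum(hyper(x) for x in range(lo, hi + 1) if hyper(x) <= p_obs * (1 + 1e-9))

p = fisher_exact_two_sided([[10, 0], [48, 3]])
print(round(p, 3))  # a large p, consistent with the paper's reported non-difference
```

Whatever the authors actually ran, the substantive point stands: the two “no COI” groups editorialized at essentially the same rate.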

Introduction and Discussion.

To round out this review: The Introduction is mostly just what you would expect, with random undergraduate-level discussion about vaping in general and various bits of ancient history, along with the usual pot-kettle ranting about COI. There are no references to any serious analyses of the concept of COI, nor a word about what the authors even think COI is — unsurprisingly, since it is obvious they have no idea. Interestingly, however, there is a paragraph about how “contradictory” results in the literature may be driven by different methodologies. You might think that someone who wrote that would go on to actually look at methodologies, but no. (Also, the fact that they think of different results from different methodologies as “contradictory” is a deep indication of just how unsophisticated these authors are. They are basically telling us that they themselves are only capable of reading at that naive “real-life” level they used — i.e., they only understand asserted conclusions, not the actual science.)

The Discussion section is a remarkable self-own. This is not limited to the “strengths and limitations” paragraphs, though these might as well just have read “not only do we not understand what a legitimate analysis of this would have looked like, but we don’t even understand what we did.” They are utterly unaware of the actual limitations noted above (I would normally suggest that they might be pretending to be unaware, but in this case they probably really are). What they cite as strengths are equally absurd for reasons noted above.

They note how the different coding dimensions got similar results, not recognizing that this is because they were measuring the same phenomena. They complain, “No tobacco industry–related papers expressed concerns about the health effects of e-cigarettes”, not recognizing that this means that (a) they did not find anything alarming because (as far as we can tell) there is nothing alarming to find if you use valid methods, and (b) unlike the “no COI” researchers, they are proper scientists who do not draw conclusions about outcomes (let alone policies) that were not assessed in their research. They suggest that their results are similar to those from other fields, apparently oblivious to the huge differences among those situations and literatures. And of course they liken their result to the completely dissimilar ancient history when cigarette companies did produce dishonest research. They ramble on about this for a while.

Their second biggest self-own is, “Very concerning is that the tobacco industry papers are cited more often than papers written by independent [sic] researchers.” Hmm, I wonder if there is a reason for that? It also turns out that the New York Review of Books is cited more often than the National Enquirer.

Their greatest self-own, however, is, “Penalties to authors who do not disclose [COI] correctly have been proposed….” Um, I have some bad news for you.

FTFY.

Finally, this is what the abstract of the paper should have said, based on the above analysis:

In the vaping research space, the major corporations can afford the highest-quality personnel and equipment, and pay researchers to focus and take the time to do the work carefully and correctly. They are under enormous pressure to make sure their methods and results are valid and replicable, and to carefully avoid engaging in political editorializing when reporting their study results. By contrast, university and government research generally relies on students or other low-cost researchers. Those authors are usually under serious time pressure and often working on multiple projects, but face basically no pressure to do legitimate work. They feel free to spin the results and editorialize about their personal political opinions, and there are no repercussions when their methods or results are demonstrated to be fatally flawed. In addition, most such researchers are dependent upon (or hope to become dependent upon) grants from anti-vaping agencies, which are likely to not be forthcoming if the papers do not support the political agenda.

We assessed whether authors of papers about vape chemistry engaged in political spin in their abstracts and cross-tabulated that against funding sources. Our research found that papers that did not have the benefits of industry funding were much more likely to make unsupported anti-vaping political statements. Presumably this was mostly driven by them lacking the pressure to report accurately and avoid political spin, though many might have also used flawed or biased methodology (which we did not attempt to assess).

It is impossible to separate political statements that were motivated by financial conflicts of interest from those motivated by personal political conflicts of interest. However, we did observe that among the abstracts that did not have the benefits of industry funding, there was little difference in political spin between those whose authors were known to be under financial pressure to produce anti-vaping results (because they receive funding from U.S. anti-vaping agencies or WHO) and the others. This suggests that the politicizing in those papers is driven primarily by non-financial conflicts of interest, as well as the general sloppiness of university research papers, and that anti-vaping funders are largely just rewarding such bias rather than causing it.

Science Lesson: Conflating age with inevitable temporality (i.e., some things first occur in youth merely because youth comes first)

by Carl V Phillips

A random science lesson, because I have not written a good “the conventional wisdom — how everyone looks at this and thinks is self-evidently true — is not the only plausible explanation” lesson in a while (other than tweet storms), and just want to. I was triggered on the topic by some chatter I saw about a recent paper, though neither of those is particularly important (so no links).

Consider an example from another realm: A large portion of significant original contributions in theoretical mathematics are figured out, or at least the seeds are planted, when the author is under 25 years old, or even under 20. The conventional wisdom is — or was (I have been out of that field for a long time) — that people's sheer physical brainpower in this area declines with age, and that this is the only time someone has the ability to outperform all who have come before them. It is like being a professional athlete. You can be a perfectly solid athlete or science geek at 60 if you have the natural skills and keep at it, but to be among the absolute best — among the 0.001% who can be a performance-level jock or breakthrough mathematician — you have to have both the natural skills and be at your lifecycle physical peak.

But there is a plausible alternative theory that was pointedly ignored in that conventional wisdom: Generations of mathematicians have already worked out everything, within the bounds of what occurs to them to work on, that can be done by just plugging away at it. Therefore, new breakthroughs only come when someone is wired enough differently to see something beyond that, either in terms of recognizing something outside the existing bounds to pursue or some striking insight into a within-bounds problem. That is, they need to not just be solid in the skills of the field, but have one little cognitive quirk that no one else had. Either they have that when they are 16 or they don’t. If they do, they make their breakthrough early because they can. It is not about age — if one was somehow prevented from making the breakthrough for a couple of decades (but managed to keep up his skills in the field and was not scooped), he would have made it later.

Perhaps the relative contributions of those two factors have been largely resolved — as I said, I have been out of that area a long time. In contrast with the tobacco realm, most everyone who is aware of that debate is a smart, clear thinker, so they may have long since worked out how much each of the stories explains the association of age and breakthroughs. But the point is that the naive explanation for something being associated with age — that it must have been entirely caused by age itself — was not so obviously correct as the conventional wisdom had it.

This is a metaphor, of course, for all the claims about tobacco use initiation, habituation, “addiction”, and such that are attributed to age because they are associated with age. This is a fail for exactly the reason found in the alternative theory of math prodigies: If something were able/likely to happen sometime in someone’s life, but not in most people’s, the fact that it happened early among the former (because it could) is not informative.

So we have the conventional wisdom that because smokers (etc.) mostly start fairly early in life, if you stop them from starting early, they never will. This is undoubtedly true to some extent. Everyone gets more set in their ways about what they do and do not do after adolescence. For smoking specifically, having adult-level judgment and a more forward-looking mindset makes it much less appealing (though this is not true for low-risk and potentially net beneficial smoke-free products). But it is obviously not nearly as true as is generally claimed. Someone who would have used a product at 16, but is somehow kept from doing so for two years does not magically revert to having the average lack of interest (which means being below the line for inclination to use the product) at 18. The same is true if you substitute age pairings 18…21 or even 16…40.

My goal here is to just immunize readers against the common naive error by planting the idea, so I am not going to delve deeply into the data. But just notice that transitioning to “smoker” status has gone down sharply among 14-year-olds in the US population, but not 18-year-olds. It is down overall, of course, but it is impossible to not notice that some of the “success” at earlier ages consists of delay rather than elimination. If the conventional wisdom were true, we should not have seen the sharp rise in the average age for that transition; the conventional wisdom says that the people who are pulling that average up do not exist.

The issue is clearer still for claims about early-initiating smokers (etc.) being more habituated (usually called "addicted" of course, but my readers will understand why that is bullshit rhetoric). If there is any variation within the population in terms of who is inclined to become strongly habituated — and obviously there is, due to both biological and social factors — then of course we see this. Those who are most inclined quickly become regular consumers upon first trialing at, say, 13. Those eventual smokers (etc.) who ramp up more slowly were not so enamored, and so waited until it was easier to do so. The former group are undoubtedly less likely to quit, have higher "dependence" scores, etc. The rhetoric attributes all of this obvious confounding to causation.
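The confounding argument above can be illustrated with a toy simulation. Individuals here vary in an underlying propensity to become habituated; high-propensity people start earlier and score higher on "dependence," with no causal effect of starting age built into the model at all. Every number and functional form is invented for illustration.

```python
# Toy simulation of the confounding story: dependence is driven ONLY by a
# latent propensity, never by starting age, yet early starters still show
# higher mean dependence. All parameters are invented for illustration.
import random

random.seed(0)

people = []
for _ in range(10_000):
    propensity = random.random()                        # latent inclination, 0..1
    # Higher propensity -> earlier initiation (pure selection, not causation)
    start_age = 13 + (1 - propensity) * 10 + random.gauss(0, 1)
    # Dependence depends only on propensity; start_age never enters
    dependence = propensity + random.gauss(0, 0.1)
    people.append((start_age, dependence))

early = [d for a, d in people if a < 18]
late = [d for a, d in people if a >= 18]

print(f"mean dependence, started <18:  {sum(early)/len(early):.2f}")
print(f"mean dependence, started >=18: {sum(late)/len(late):.2f}")
# The early group scores markedly higher even though age plays no causal role.
```

The point of the sketch is the one in the paragraph: observing the association tells you nothing about causation until the propensity story has been ruled out, and it never is.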

This does not mean that there is no biological effect of early smoking (etc.) that causes greater inclination later in life, of course. But it does mean that the main body of evidence deployed in support of that claim is worthless. My readers presumably understand that the evidence deployed in support of "gateway" claims is bullshit because it merely observes the inevitable association across individuals choosing to use very similar products. Any association that is inevitable due to confounding cannot be said to be evidence of any causation without further serious analysis, analysis that tobacco control "researchers" never do. The present case is a bit more subtle than the gateway case, but it is exactly the same problem.

Similarly, these observations do not mean that somehow preventing an incidence of initiation at 16 is always just a delay rather than permanent prevention. There is some probability of each. There is ample reason to believe that the probability of mere delay is fairly high. Yet the claims based on the observed association almost always bake in the unstated and unexamined assumption that the probability of it being mere delay is approximately zero.

I did not become a regular drinker until my 30s, or a regular user of nicotine products and sometimes [redacted because we live in a fucked-up anti-liberty police state when it comes to stuff like this] until later still. But I trialed all of these before I was 20 and did a bit during my 20s. Those who want to say "it is all about 'youth' initiation!!!" will spin this into supporting their claims. Look closely at their claims and you will see that most of them would attribute my later behavior to those largely forgotten moments from adolescence. I can tell you there was no causal continuity between the trialing and the later period of ongoing use, except via the confounding pathways. Granted I am a bit unusual — I have taken up quite a few things at times in my odd life that very few people ever do if they do not start at a much younger age: professional popular writing, various sports, farming, having babies. But the oddity there just illustrates the point that acting upon willingness or interest gets mistaken for causation, because willingness and interest are usually not kept latent for so long.

Consider one more metaphor that illustrates a different angle on this: adults who choose to visit Disney World (i.e., because they like to, not just because they are roped into taking their kids). There is undoubtedly a huge association between this and having visited as a child. Undoubtedly it is causal to some extent, but it would be obviously stupid to assume the association is all causal. Among those negative for both traits are those with a religious or semi-religious objection to visiting, those who disdained the idea as children (often because their particular subculture thinks of it as belonging to Others), and those for whom making the trip is unaffordable. Those traits tend to be fairly persistent through the lifecycle, and this alone creates an association. Among those positive for both traits are those who just love stuff like that, and so pushed their parents to take them and later chose to go again when they could. This increases the association with no causation in sight yet. Finally, among those positive for both are those who go back because they remember how much they enjoyed it as kids, the causal group. The "logic" of the tobacco control literature and rhetoric would be to claim that the association is caused entirely by the latter group.
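The Disney decomposition above amounts to simple arithmetic, which a few invented counts make concrete. All of the group sizes below are made up for illustration; the point is only that the observed association mixes a causal subgroup with two non-causal ones.

```python
# Toy decomposition of the Disney association: the child-visit/adult-visit
# association mixes a causal group with non-causal ones. Counts (out of 1000
# adults) are invented for illustration only.
causal_both = 100        # go back because they remember loving it as kids
confounded_both = 150    # "just love stuff like that" types (no causation)
child_only = 250         # visited as kids, never return as adults
adult_only = 50          # never went as kids, visit as adults anyway
neither = 450            # objectors, disdainers, those who cannot afford it

child_visitors = causal_both + confounded_both + child_only
non_child_visitors = adult_only + neither

p_adult_given_child = (causal_both + confounded_both) / child_visitors
p_adult_given_no_child = adult_only / non_child_visitors

print(f"P(adult visit | child visit)    = {p_adult_given_child:.2f}")    # 0.50
print(f"P(adult visit | no child visit) = {p_adult_given_no_child:.2f}")  # 0.10
# Attributing the whole gap to causation overstates it: only part of the
# positive-positive group is there for causal reasons.
causal_share = causal_both / (causal_both + confounded_both)
print(f"Causal share of positive-positive group = {causal_share:.2f}")   # 0.40
```

Under these made-up numbers, a fivefold association exists even though less than half of the repeat visitors are in the causal group, which is exactly the error the paragraph attributes to the tobacco control literature.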

I would assume that the marketing people at DisneyCorp — who are presumably much better at their jobs than most tobacco researchers and pundits are — have this all worked out and make extensive use of that knowledge. It would undoubtedly be possible to form honest estimates that separate the contributions of causation-by-age and mere temporality in the tobacco space also. But few in that space even recognize this is an issue, most of them would prefer to pretend it is not, and fewer still have the skills to do the (actually pretty simple) analysis to try to sort it out.

It is one more persistent set of lies (partially intentional, partially due to Dunning-Kruger) to be aware of when analyzing tobacco control claims.

My recent contribution to Clive’s weekly reading list

by Carl V Phillips

As some of you know, Clive Bates puts out a weekly somewhat-annotated list of PubMed-indexed articles that are related to low-risk tobacco products and/or tobacco harm reduction (the search string for that appears at the end of what follows). It is a great resource; if you do not receive it, I am sure he would be glad to add you to the distribution list. As part of a planned project that I have alluded to before, I am working on how to reinterpret this as an annotated weekly suggested reading (or knowing-about) list. To that end, this week I was a "guest editor" for Clive's distribution list, and I thought I should share what I wrote here to broaden the audience. Yes, it is a little weird to publish a one-off "weekly reading" that is mostly based on an existing format that you might not be familiar with. But you should be able to get the idea. Hopefully I will be producing one every week before too long.

In the meantime, here is what I wrote that went out via Clive's distribution lists. Sorry for the weird formatting — it is an artifact of the way the original PubMed search was formatted. Yes, I could have cleaned it up for aesthetics to re-optimize for this blog's formatting, but since the quirks do not hinder comprehension, I am not going to bother — sorry.


Greetings everyone. Carl V Phillips here, doing Clive's list this week. I am trying out a new format for it, as follows: (1) The entries are not listed in the order that popped from the PubMed search string, but rather in order of how worth reading they are. Obviously this is my own rough blend of various considerations, including importance of what is being addressed, value of what was produced, how potentially influential it is, and how much reader effort it takes to get value from it (note that I put relatively little weight on the latter). I have left the serial numbers from the search on the entries in case anyone wants to recreate the usual ordering. I add a full-text link if I think anyone other than specialists in the particular area would want to look at the full text. (2) I am not limiting this to PubMed-indexed papers. I am including popular press and policy statements (and would have included blogs but there were not any apparent candidates this week).

Continue reading

Weekly reading: ~20 Nov 2018

Something about this post (the title and thus the URL, I guess) made it so half my readers could not access it. So I replaced it with an exact duplicate here. I am leaving this here as a placeholder for those who do navigate to it, but deleting the duplicate content.

Peer review of: Linda Johnson et al. (Washington U med school), E-cigarette Usage Is Associated with Increased Past 12 Month Quit Attempts and Successful Smoking Cessation in Two U.S. Population-based Surveys, NTR 2018.

by Carl V Phillips

For an overview of this collection and an explanation of the format of this post, please see this brief footnote post.

The paper reviewed here is available at Sci-Hub. The paywalled link is here.

This collection will focus mainly on the misleading anti-THR papers produced by tobacco controllers. However, it is useful and important to provide reviews of potentially important papers that might be called pro-THR. This is one example of a paper that has gotten a lot of "ha, take that!"-toned traction.

If a “pro-THR” paper is tight, a review will provide a substantive endorsement, as positive reviews should do (but as the anonymous and secret — and presumptively poor-quality — journal reviews cannot do), as well as a signal boost. If a paper is useful but importantly flawed (as in the present case), the review can correct or identify the errors and focus attention on the defensible bits. And if the paper is fatally flawed, the review should point that out. Bad advice is still bad advice when it feels like it is “on your side”. Even when a paper basically only provides political ammunition and not advice, it is important to assess its accuracy. We are not tobacco controllers, after all, who just make up whatever claims seem to advance their political cause.

—-

Johnson et al. use historical nationally-representative U.S. tobacco use data (NHIS from 2006 to 2016 and CPS over most of that period), for 25- to 44-year-olds, looking at the rate of smoking quit attempts and the association between vaping status and quit attempts or successful smoking abstinence. The authors report an unconditional increase in the population for both quit attempts (measured as the rate of past-year incidence among people who smoke) and medium-term smoking abstinence. They also report a positive association between vaping and smoking quit attempts and abstinence at the individual level. They interpret their results as running contrary to the recent spate of "vapers are less likely to quit" claims, stating "These trends are inconsistent with the hypothesis that e-cigarette use is delaying quit attempts and leading to decreased smoking cessation."

This is an overstatement, but the results do run contrary to the “vaping is keeping smokers from quitting” trope that the authors position their paper as a response to. This research clearly moves our priors a bit in the direction of “yes, vaping encourages people to quit smoking, and helps them do so.” Our priors only move “a bit” because rational beliefs based on all available evidence tell us we should be very confident of that conclusion already. They should instead have said something like “even if you naively believe in those methods, for this data the result is different”, but such (appropriate) epistemic modesty is absent.

The paper is quite frustrating in that the authors seem to not recognize which of their statistics are actually most informative and persuasive, let alone take the deeper dive into specific implications that could have been done. The natural experiment interpretation of some of the results is more compelling than the behavioral-association-based analysis (see below). The authors overstate the value of their association statistics and effectively endorse the same flawed methods that are the source of the “vapers are less likely to quit” literature. Continue reading