FDA’s proposed smokeless tobacco nitrosamine regulation: innumeracy and junk science (part 2)

by Carl V Phillips

In the previous post, I gave some background about the new proposed rule from FDA’s Center for Tobacco Products (CTP) that would cap the concentration of the tobacco-specific nitrosamine (TSNA) known as NNN allowed in smokeless tobacco products (ST). Naturally, I think you should read that post, but to follow the scientific analysis which begins here, you do not need to.

Before even getting to the even worse nonsense about NNN itself, it is worth addressing CTP’s key premise here: They claim that ST causes enough cancer risk, specifically oral cancer, that reducing the quantity of the putatively carcinogenic NNN could avert a lot of cancer deaths.

Readers of this blog will know that the evidence shows ST use does not cause a measurable cancer risk. That is, whatever the net effect of ST use on cancer (oral or otherwise), it is not great enough to be measured using the methods we have available. That does not necessarily mean it is zero, of course. Indeed, it is basically impossible that any substantial exposure has exactly zero (or net zero) effect on cancer risk. But even if all the research to date had been high-quality and genuinely truth-seeking — standards not met by much of the epidemiology, unfortunately — there is no way that we could detect a risk increase of 10% (aka, a relative risk of 1.1) or, for that matter, a risk decrease of 10%. Realistically, we could not even detect 30%. For some exposure-disease combinations it is possible to measure changes that small with reasonable confidence (anyone who tries to tell you that all small relative risk estimates should be ignored does not know what he is talking about). But it is not possible for this one, at least not without enormously more empirical work than has been done.

Despite that, FDA bases the justification for the rule on the assumption that ST causes a relative risk for oral cancer of 2.16 (aka, a 116% increase), or a bit more than double. This eventually leads to their estimate that 115 lives will be saved per year. Before even getting to their basis for that assumption, it is worth observing just how big this claimed risk is. (I will spare you a rant about their absurd implicit claims of precision, as evidenced in their use of three significant figures — claiming precision of better than one percent — to report numbers that could not possibly be known within tens of percent. I wrote it but deleted it and settled for this parenthetical.)

A doubling of risk, unlike the change of 10% or 30%, would be impossible to miss. Almost every remotely useful study would detect an increase. Due to various sources of imprecision, some would have a point estimate for the relative risk of 1.5 (aka, a 50% increase) and some 3.0, but very few would generate a point estimate near or below 1.0. Yet the results from most published studies cluster around 1.0, falling on both sides of it.

You would not even need complicated studies to spot a risk this high. More than 5% of U.S. men use smokeless tobacco. The percentages are even higher, obviously, for ever-used or ever-long-term-used, which might be the preferred measure of exposure. This would show up in any simple analysis of oral cancer victims. With 5% exposed, doubling the risk would mean about 10% of oral cancer cases among nonsmoking males would be in this minority. A single oral pathology practice that just asked its patients about tobacco use would quickly accumulate enough data to spot this. It is not quite that simple (e.g., you have to remove the smokers, who do have higher risk) but it is pretty close. The point is that the number is implausible.
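
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch (the function and the numbers are mine and purely illustrative, not data from any study): with 5% of the relevant population exposed and a relative risk of 2, roughly one case in ten would be expected among the exposed, and the share climbs quickly at higher prevalences.

    # Sketch: what share of cases would occur among the exposed, given an
    # exposure prevalence and a relative risk? (Illustrative arithmetic only.)
    def exposed_share_of_cases(prevalence, relative_risk):
        """Fraction of cases among the exposed, assuming a uniform baseline risk
        in the unexposed and baseline * relative_risk in the exposed."""
        exposed_cases = prevalence * relative_risk
        unexposed_cases = (1 - prevalence) * 1.0
        return exposed_cases / (exposed_cases + unexposed_cases)

    print(exposed_share_of_cases(0.05, 2.0))   # ~0.095, i.e. about 10% of cases
    print(exposed_share_of_cases(0.30, 2.0))   # ~0.46 at a 30% exposure prevalence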

In Sweden, ST use among men is in the neighborhood of 30% (and smoking is much less common). A doubling of risk for any disease that is straightforward to identify, like oral cancer and most other cancers, would be much more obvious still. But no such pattern shows up. The formal epidemiology also shows approximately zero risk. Most of the ST epidemiology is done in Swedish populations, basically because relatively common exposures are much easier to study.

So how could someone possibly get a relative risk estimate of more than double?

The answer is that they created the absurd construct, “all available U.S. studies” and then took an average of all such results. (They actually used someone else’s averaging together of the results. They cite two papers that did such averaging and — surprise! — chose the higher of the results, though that hardly matters in comparison to everything else.) This is absurd for a couple of reasons which are obvious to anyone who understands epidemiologic science, but not so obvious to the laypeople that the construct is designed to trick.

You might be thinking that it is perfectly reasonable to expect that different types of ST pose different levels of risk. Indeed, that seems to be the case (however, the difference is almost certainly less than the difference among different cigarette varieties, despite the tobacco control myth I mentioned in Part 1, the claim they are all exactly the same). But nationality obviously does not matter. Should Canadian regulators conclude that nothing is known about ST because there are no available Canadian studies? This is like assessing the healthfulness of eating nuts by country; the difference is not about nationality but mostly about what portion of those nuts are peanuts (which are less healthful than tree nuts). If the category of nuts is to be divided, the first cut should be health-relevant categories of nuts, not nationality. Nutrition researchers and “experts” are notoriously bad at what they do, but few of them would make the mistake FDA made here.

The error is particularly bad in this case: It turns out the evidence does not show a measurable difference in risk between the products commonly used in the USA and those commonly used in Sweden. The data for all those is in the “harmless as far as we can tell” range. But it appears that an archaic niche ST product, a type of dry powdered oral snuff, that was popular with women in the US Appalachian region up until the mid-20th century, posed a measurable oral cancer risk. It turns out that a hugely disproportionate fraction of the U.S. research is about this niche product — disproportionate compared to even historical usage prevalence, let alone the current prevalence of about nil. There is nothing necessarily wrong with disproportionate attention; health researchers have perfectly good reasons to study the particular variations on products or behaviors that seem to cause harm. Also, it is much easier to study an exposure if you can find a population that has a high exposure prevalence, in this case Appalachian women from the cohorts born in the late 19th and early 20th centuries.

It is not the disproportionate attention that is the problem. The problem is the averaging together of the results for the different products. Such an average might have some meaning if it were weighted correctly, but it was very much not weighted correctly.

The 2.16 estimate was derived using the method typically called meta-analysis, though it is more accurately labeled synthetic meta-analysis since there are many types of meta-analysis. It consists basically of just averaging together the results of whatever studies happen to have been published. Even in cases that are not as absurd as the present one, this is close to always being junk science in epidemiology. The problems, as I have previously explained on this page, include heterogeneity of exposures, diseases, and populations, which are assumed away; failure to consider any study errors other than random sampling error; and masking of the information contained in the heterogeneity of the results. To give just a few examples of these problems: Two studies may look at what could be described in common language as “smokeless tobacco use”, but actually be looking at totally different measures of quite different products. Similarly, one study might look at deaths as the outcome and another look at diagnoses, which might have different associations with the exposure. A study might have a fairly glaring confounding problem (e.g., not controlling for smoking), but get counted just the same, obscuring its fatal flaw as it is assimilated into the collective. One study might produce an estimate that is completely inconsistent with the others, making clear there is something different about it, but it still gets averaged in.
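
For readers who have never looked inside the machinery, that really is all a synthetic meta-analysis does. A minimal sketch, assuming the standard fixed-effect, inverse-variance pooling of log relative risks (the study numbers here are invented for illustration and are not the actual ST literature):

    import math

    # Hypothetical study results: (relative risk, standard error of the log RR).
    # Three near-null studies plus one outlier, all invented for illustration.
    studies = [(0.9, 0.20), (1.1, 0.25), (1.0, 0.15), (3.5, 0.15)]

    # Fixed-effect inverse-variance pooling on the log scale: every study is
    # treated as an estimate of one common effect, whatever it actually measured.
    weights = [1 / se ** 2 for _, se in studies]
    pooled_log_rr = sum(w * math.log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
    print(round(math.exp(pooled_log_rr), 2))  # ~1.52, dragged up by the single outlying study

Nothing in that calculation asks whether the studies measured the same exposure, the same outcome, or comparable populations; heterogeneity and non-sampling error simply never enter.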

But beyond all those serious problems with the method in general, all of which occur in the present case, this case is even worse. It is worse in a way that makes the result indisputably wrong for what FDA used it for; there is simply no room for “well, that might be a problem but…” excuses. It is easy to understand this glaring error by considering an analogy: Imagine that you wanted to figure out whether blue-collar work causes lung disease. This might not be a question anyone really wants an answer to, but it is still a scientific question that can be legitimately asked. Now imagine that to try to answer it, you gather together whatever studies happen to have been published in journals about lung disease and blue-collar occupations. As a simplified version of what you would find, let us say that you found two about coal miners, one about Liberty ship welders, one about auto body repair workers, one about secretaries, and two about retail workers. So you average those all together to get the estimated effect on lung disease risk of being a blue-collar worker.

See any problem there? If you do, you might be a better scientist than they have at FDA.

Obviously the mix of studies does not reflect the mix of exposures. Why would it? There is absolutely no reason to think it would. Notwithstanding current political rhetoric, only a minuscule fraction of blue-collar workers are in the lung-damaging occupations at the start of the list. The month-to-month change in the number of retail jobs exceeds total jobs in coal mining. But the meta-analysis approach is to calculate an average that is weighted by the effective sample size of each study, with no consideration of the size of the underlying population each study represents. The proper weighting could easily be done, but it was not in my analogy nor in the ST estimate FDA used (nor almost ever). If all the studies in our imaginary meta-analysis have about the same effective size, this average puts more weight on the <1% of the jobs that cause substantial risk than on the majority that cause approximately zero risk. (Assume that you effectively controlled for smoking, which would be a major confounder here, creating the illusion that even harmless blue-collar jobs cause lung disease, as is also a problem with ST research.)
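
Here is a sketch of the difference, using the blue-collar analogy with invented round numbers (the relative risks and workforce shares below are illustrative placeholders, not real occupational data):

    # Invented per-occupation relative risks and (hypothetical) shares of the
    # blue-collar workforce in each occupation; shares sum to roughly 1.
    occupations = {
        "coal mining":  (5.0, 0.005),
        "ship welding": (4.0, 0.002),
        "auto body":    (1.5, 0.01),
        "secretarial":  (1.0, 0.30),
        "retail":       (1.0, 0.68),
    }

    # What the naive meta-analysis effectively does: weight by the published
    # studies (two coal, one welding, one auto body, one secretarial, two retail).
    study_mix = ["coal mining", "coal mining", "ship welding", "auto body",
                 "secretarial", "retail", "retail"]
    study_weighted = sum(occupations[o][0] for o in study_mix) / len(study_mix)

    # What the question actually calls for: weight each occupation's risk by how
    # many blue-collar workers actually hold that kind of job.
    prevalence_weighted = sum(rr * share for rr, share in occupations.values())

    print(round(study_weighted, 2), round(prevalence_weighted, 2))  # ~2.64 vs ~1.03

Same underlying numbers, wildly different answers; only the second comes anywhere near describing the risk faced by a typical blue-collar worker.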

As previously noted, it is not only possible, but almost inevitable that studies will focus on the variations of exposures that we believe cause a higher risk. No one would collect data to study retail workers and lung disease. If they have a dataset that happens to include that data, they will never write a paper about it. (This is a kind of publication bias, by the way. Publication bias is the only one of the many flaws in meta-analysis that people who do such analyses usually admit to. However, they seldom understand or admit to this version of it.)

It turns out that this same problem is no less glaring in the list of “all available U.S. studies” of ST. In that case, about 50% of the weight in the average is on the studies of the Appalachian powdered dry snuff[*], which accounts for approximately 0% of what is actually used. Indeed, the elevated risk from the average is almost entirely driven by a single such study (Winn, 1981), which is particularly worth noting because this study’s results are so far out of line with the rest of the estimates in the literature. A real scientific analysis would look at that and immediately say that study cannot plausibly be a valid estimate of the same effect being measured in the other studies; it is clearly measuring something else or the authors made some huge error. Thus it clearly makes no sense to average it together with the others.

Note:
[*] As far as we can tell. The reporting of methods in the studies was so bad — presumably intentionally in some cases — that they did not report what product they were observing. We know that the Winn study subjects used powdered dry snuff because she admitted it in a meeting some years later, and this was transcribed. She has made every effort to keep that from getting noticed in order to create the illusion that the products that are actually popular cause measurable risk. For some of the other studies we can infer the product type from gender and geography (i.e., women in particular places tended to be users of powdered dry snuff, not Skoal).

It is amusing to note what Brad Rodu did with this. Recall that the over-represented powdered dry snuff was used by Appalachian women. So effectively Brad said, “ok, so if you are going to blindly apply bad cookie-cutter epidemiology methods rather than seeking the truth with scientific thinking, you should play by all the rules of cookie-cutter epidemiology: you are always supposed to stratify by sex” (my words, not his). It turns out that if you stratify the results from “all available U.S. studies” by sex (or gender, assuming that is what they measured — close enough), there is a huge association for women (relative risk of 9) and a negative (protective) association for men. ST users in the USA are well over 90% male. Brad has some fun with that, doing a back-of-the-envelope calculation to show that if you apply that 9 to women and zero risk to men, you get only a small fraction of the supposed total cases claimed by FDA. And this is a charitable approach: If you actually applied the apparent reduced risk that is estimated for men, the result is that ST use prevents oral cancer deaths on net.
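
Roughly reconstructing that back-of-the-envelope logic with round numbers (the user counts and baseline death rates below are my own placeholders, not FDA’s or Brad’s actual inputs; only the comparison between the scenarios matters):

    # Hypothetical inputs: ST user counts by sex (>90% male) and baseline annual
    # oral cancer death rates among comparable never-users. Placeholder values.
    users_male, users_female = 9_000_000, 500_000
    base_rate_male, base_rate_female = 1.0e-5, 0.4e-5

    def excess_deaths(rr_male, rr_female):
        """Annual excess deaths implied by applying sex-specific relative risks."""
        return (users_male * base_rate_male * (rr_male - 1)
                + users_female * base_rate_female * (rr_female - 1))

    print(excess_deaths(2.16, 2.16))  # one pooled RR applied to everyone: ~107
    print(excess_deaths(1.0, 9.0))    # men null, women RR 9: ~16, a small fraction
    print(excess_deaths(0.7, 9.0))    # men's protective estimate at face value: net negative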

Notice that in my blue-collar example, you would also get a large difference by sex, with almost all the elevated risk among men. Of course, there is no reason to expect that sex has a substantial effect on either of these, or most other exposure-disease combinations. Results typically get reported as if any observed sex difference is real, but that is just another flaw in how epidemiology is practiced. The proper reason for doing those easy stratifications is to see if they pop out something odd that needs to be investigated, not because any observed difference should be reported as if it were meaningful. When there is a substantial difference in results by sex for any study where the outcome is not strongly affected by sex (e.g., not something like breast cancer or heart disease), it might really be an inherent effect of sex, but it is much more likely to be a clue about some other difference. Maybe it shows an effect of body size or lifestyle. Or perhaps the “same” exposure actually varied by sex. In the ST and blue-collar cases, we do not have to speculate: it is obvious the exposure varied by sex.

The upshot is not actually that when assessing the average effect, you should stratify the analysis by sex (though it is hard not to appreciate the nyah-nyah aspect of doing that). It is that averaging together effects of fundamentally different exposures produces nonsense. If there is a legitimate reason to average them together (which is not the case here), the average needs to be weighted by prevalence of the different exposures, not by how many studies of each happen to have appeared in journals.

It gets even worse. I put a clue about the next level of error in my blue-collar example: the shipyard welders worked on Liberty ships. In the 1940s, ship builders had very high asbestos exposures, the consequences of which were not appreciated at the time. Today’s ship welders undoubtedly suffer some lung problems from their occupational exposures, but nothing like that. Similarly, regulations and better-informed practices have dramatically reduced harmful exposures for coal miners and auto body workers. In other words, calendar time matters. Exposures change over time, and the effects of the same exposure often change too, with changes in nutrition, other exposures, and medical technology. There are no constants in epidemiology. (That last sentence, by the way, is a good six-word summary of why meta-analyses in health science are usually junk.)

One of the meta-analysis papers FDA cites breaks out the study results into studies from before 1990 and studies from after that. It turns out that the older group averages out to an elevated risk, while the later ones average out to almost exactly the null. This is true whether you look at just U.S. studies or studies of all Western products. Does this mean that ST once caused risk, but now does not? Perhaps (a bit on that possibility in Part 3). Some of it is clearly a function of study quality; I have pored over all those papers and some of the data, and the older ones — done to the primitive standards of their day — make today’s typical lousy epidemiology look like physics by comparison. A lot of this difference is just a reprise of the difference between the sexes: the use of powdered dry snuff was disappearing by the 1970s or so (basically because the would-be users smoked instead). In case it is not obvious, if you have a collection of modern studies that show one result and a smaller collection of older studies that show something different, you should not be averaging them together.

In short, a proper reading of the evidence does not support the claim that ST causes cancer in the first place. But even if someone disagrees and wants to argue that it does, that 2.16 number is obviously wrong and based on methodology that is fatally flawed three or four times over. That is, even if one believes that ST causes oral cancer, and even if he believes it could double the risk (setting aside that such a belief is insane), relying on this figure makes the core analysis that justifies this regulation junk science.

The next post takes up the issue of NNN specifically.


FDA’s proposed smokeless tobacco nitrosamine regulation: innumeracy and junk science (part 1)

by Carl V Phillips

I am a bit late to analyze this proposed FDA rule, which was promulgated on Inauguration Day. But it is still open for comments, and I will be submitting these posts (though for reasons I will get to shortly, these and all other comments are probably moot except as for-the-record background).

Before getting to the substance it is worth noting that this is really the first bit of genuine regulation proposed by the FDA Center for Tobacco Products (CTP) in its eight years. Despite CTP reportedly approaching $4 billion in cumulative expenditures, it has only implemented a few inconsequential rules that were specifically required by the enabling legislation, and has never actually created a standard or specific requirement like a real regulator. Instead, everything it has done has been what I have dubbed weaponized kafkaism. The variation on the word “kafkaesque” refers, of course, to Kafka’s horror stories of bureaucratic (in the pejorative sense) rules that create injustice via impossible procedural burdens. “Weaponized” refers to turning something that is harmful but not malign into a tool for intentionally inflicting harm. CTP has turned filing and paperwork hurdles into a weapon. Continue reading

Time to stop measuring risk as “fraction of risk from smoking”?

by Carl V Phillips

I ran across a tweet touting a press release out of the Global Forum on Nicotine (GFN) meeting (a networking meeting, mostly of e-cigarette boosters) that made the claim that snus is 95% less harmful than smoking. This was variously described as being based on “new data”, “new data analysis” and “the latest evidence”, but with no further explanation of where the number came from. Since the presenter was Peter Lee, those of us who know who’s who can surmise that it is a statistical summary of existing published studies, because that is what Peter does. There is nothing necessarily wrong with that (though for reasons I will explain in an upcoming post, it is potentially suspect in this context), but it is certainly not new data or the latest evidence.

Oh, and it is clearly wrong. Continue reading

Lying with literally true statements is the worst kind of lying

by Carl V Phillips

This is a reprise of points I have made here before, including in the mission statement of the blog. It was inspired by this recent post by Steven Raith in which he, a relative newcomer to the tobacco wars, describes his realization of just how often tobacco control’s lies consist of literally true statements. It is always nice to see people independently derive this observation, though I have been documenting that type of lie from “public health” (along with others) for most of two decades. Raith speculates that their use of such lies is increasing, but this does not seem to me to be the case; rather, once you become aware of the tactic, you notice it more. I will come back to the question of prevalence.

Lies include any communication that is intended to make the audience believe something the communicator knows is not true. Some lies are baldly false statements, like those that dominate POTUS’s lies. I have noted that this is a “welcome to our world” moment: Suddenly everyone found themselves trying to respond to falsehoods from the government that were so obvious that it is hard to get past just sputtering at them. This is exactly the government behavior that those of us in the tobacco wars and other drug wars have faced forever.

Lies also include many technically true statements that are clearly intended to make the audience believe something that is false. I find it useful to think of two main subcategories of these: First are statements that are the semantic equivalent of an optical illusion, which almost explicitly state the lie and trick the reader into reading in the lie. These are the statements that might cause a careful reader to react with, “I see what you did there.” An example is, “smokeless tobacco is not a safe alternative to cigarettes,” a statement that is technically true because it is vacuous (nothing is safe), but is intended to be read as “…not safer than….” Second are innuendo, where the true statement does not actually contain any version of the falsehood that it intentionally communicates, such as “smokeless tobacco contains arsenic.” The statement does not actually say “…and this makes it harmful,” but just counts on people’s scientific innumeracy to fill in “…and since arsenic is bad, anything that contains it must be bad.”

It is these innuendo lies that I personally despise the most. Not only do they seek to mislead people, but they intentionally take advantage of innumeracy that our supposedly-scientific agencies and organizations should be trying to fight rather than embrace. Moreover, they tend to make that same innumeracy worse. The intended innuendo is that smokeless tobacco must be harmful because arsenic can be detected in it, but the equally clear message is that anything with any arsenic in it (i.e., vegetables and any organic matter) is harmful. This is perfect ammunition for anti-vaxxers or anti-biotech liars or all manner of anti-scientific activists.

[Random deeper dive points — please skip the next four paragraphs if you already think my posts are too long.

There are further subcategories and cross-categories of lies, and I am not attempting anything close to a complete taxonomy. Presenting as True a claim that might be true, but is highly uncertain, is another classic tobacco control tactic. One version of this is misrepresenting a single result from a single study as a robust generalizable estimate. Of course, that is also done when there is overwhelming evidence that the estimate is wrong, in which case it is simply an overt lie.

Some observers like to classify “lies of commission” versus “lies of omission”, but I find that distracting rather than useful. It roughly corresponds to the “literally false” versus “literally true” division I am emphasizing. But not quite; sometimes literally true lies involve some obvious omission of an important fact, but not always. More important, “commission” implies that other types of lies are not equally intentional acts of volition, which they are.

Not all literally false statements are lies; some are merely simplifications that are acceptable in context (“Earth is a sphere”) while a few are crafted to better communicate the practical truth than the literal truth would (“I’m allergic to peaches” rather than “I have a topical mucosal reaction which does not appear to be IgE mediated, and so is not actually an allergy, but produces the bad reaction of…”).

Not all literally false communications are statements. A question can be a lie. Or a glance. Or a lack of a glance: I had a dog who cleverly tried to lie to me about whether I had really tossed him a treat by letting it go past him without apparently knowing it; if I walked away rather than tossing him another, he would immediately turn and pick up the first one, aware of exactly where it was. More relevant, the act of citing a source in a paper can be a lie if that source is merely one piece of evidence that points toward the claim it is attached to, but is clearly not a sufficient basis for the claim.

end deeper dive]

Can something also be a lie if the person stating it actually does believe it? I would argue yes, and include other circumstances in my definition of lie for this blog. If someone has had every opportunity to learn the truth (e.g., tobacco control leaders who are confronted with the actual evidence), but they intentionally insulate themselves from it for political reasons, then their false claims (or innuendos) that contradict the truth are lies. If someone claims expertise on something that they do not really have (e.g., tobacco control’s useful idiots), but make statements as if they know what they are talking about, they are lying. Perhaps some might argue that they are only lying about their knowledge/expertise, and the statement is not itself technically a lie. But either way, they are lying. So if your layman brother says, “you should not switch to e-cigarettes — those things are worse for you than smoking”, chances are he is just a victim of propaganda, not a liar. But if @DrSJKelderMD tweets out that message, he is implicitly claiming expertise on the matter (despite also being a victim of propaganda) and that makes him a liar. (Of course, what either one of them should really be saying is “I read somewhere that…”, but that ability to accept that one’s “knowledge” is limited is beyond most people.)

Deception, not the literal truth of a statement, makes a bit of communication a lie. Some of our less insightful news media, notably including NPR News, have issued an editorial policy that they will not use the L-word to describe the claims of certain people in government because we cannot know for sure that they do not really believe the nonsense they are spouting. This is an utter fail for a couple of reasons. First, there are those I just noted: If there is clear evidence a claim is wrong, and that is available to the liar and within his ability to understand, then either he is unaware of it, making him a liar in his implicit claim he is knowledgeable about the matter, or is capable of making himself believe something he knows is not true, which is layers of liar and other pathologies of various sorts.

But there is a second, rather more important, reason why NPR et al. are wrong. It goes to the serious failure to understand the nature of knowledge that consists of putting claims in the bins {known, unknown}. If those are the categories, then everything about the material world falls into the second. This is the same error that appears in so many debates about interpreting study results — e.g., whenever you see someone (who clearly does not understand scientific knowledge) say “this is an observational study [or ‘cohort study’, or whatever] and so cannot prove causation.” No body of evidence can ever prove or let us know (with certainty) anything about the material world. We only have degrees of confidence in a conclusion. So refusing to properly label a lie just because we do not know what is in the speaker’s mind is a logic that precludes ever reporting any conclusion about anything.

So with respect to degrees of confidence that tobacco control (and “public health” more generally) statements are lies, the evidence is pretty overwhelming. They are constantly told, by people who are obviously expert, why their claims are wrong. They cannot not know. Those who genuinely cannot understand the truth are lying about being minimally competent to have an opinion on the topic.

In fairness our confidence that someone is lying should diminish as there start to be legitimate reasons someone might believe a claim. Arguably, it might also diminish if the claim is obviously wrong, but it requires serious technical expertise to understand why. But much of what “public health” claims is obviously and indisputably wrong, at a level that any undergraduate should be able to understand.

Consider especially the cases where they torture their phrasing of the lie so that their exact words are technically true. This is evidence that they actually do know the truth. Someone who asserts an obviously false statement might genuinely believe it (though still be a liar for the above reasons). Someone who carefully crafts a statement that communicates a lie, while still being able to claim that his statement does not violate the Administrative Procedures Act prohibition against false statements, is not making a mistake. Both the lie itself and the effort to keep from being sued for it are obviously intentional. That is the most evil kind of liar. There can be no question that his intent is to lie.

Of course, there is nothing novel or clever about this. Every teenager figures out how to lie with literal truths. “I got home before midnight, just like you told me to” (“…before sneaking back out”). “I was at work all evening, and did not go out partying with my friends” (“…they just dropped by at my break and we smoked weed in the parking lot”). Of course, a parent who figures out that the literal truth was really a lie is unlikely to be particularly impressed by the use of literal truth. And neither should those judging public health liars.

Finally, to Raith’s suggestion that the tendency of tobacco control to lie with literal truth seems to be increasing, I have to say no. It is, of course, impossible to really apply any precise measure to this. Which statements should we count? Every last utterance by some useful idiot, or only the more faux-authoritative statements? Who do we count as faux-authoritative? How do we weight a lie by CDC that gets reprised by a hundred idiots at local health departments versus a lie by the Truth (hahaha) Initiative that is broadcast on television? Statistics are fairly meaningless for things like this, and anyone who suggests otherwise is, well, lying. (This morning on the radio, I heard the claim that anti-semitic incidents had increased by 86% since Trump took office. Just ponder how incredibly stupid this precise statistical claim about an ill-defined and difficult-to-document collection of events is.)

That said, I think in terms of chronicling the lies of tobacco control, I am probably the best excuse for a lie-o-meter that we have. I have been carefully observing and documenting their lies and methods of lying for the course of the 21st century. My observation is that out-and-out false anti-THR lies were more common at the start of the century than they are now. But thanks to badgering by Brad Rodu, me, and a few others, the leading liars were forced to back off of those. So for a decade or more, technical truth lies have been predominant. When some novelty emerges, like the whole phenomenon of e-cigarettes or a single new junk “study”, we tend to see an increase in the literally false lies. But there is a drift back to an equilibrium where most lies are the “optical illusion” or innuendo type.

So someone who has focused only on the lies about e-cigarettes, from the time that tobacco control started lying about e-cigarettes, will probably have noticed an increase in the prevalence of literal truth lies. But the mix of public health lies overall — about smoking, ETS, smokeless tobacco — has been pretty consistent. Literally true lies are the norm. Thus it is important to recognize that they are clearly worse than the literally false ones.

What is peer review really? (part 9 — it is really a crapshoot)

by Carl V Phillips

I haven’t done a Sunday Science Lesson in a while, and have not added to this series about peer review for more than two years, so here goes. (What, you thought that just because I halted two years ago I was done? Nah — I consider everything I have worked on since graduate school to be still a work in progress. Well, except for my stuff about what is and is not possible with private health insurance markets; reality and the surrounding scholarship have pretty much left that as dust. But everything else is disturbingly unresolved.) Continue reading

Real implications of the RSPH “sting” of ecig vendors

by Carl V Phillips

So apparently there is a UK organization known as the Royal Society of Public Health which presumably had some importance back when the East India Company was not just a retail brand. And apparently they did some secret shopper research of vape stores for purposes of generating publicity for themselves (like a retail brand would). The main payload was the breathless observation that half the shops did not interrogate the customer about their smoking status before selling e-cigarette products to them, and that most of the rest did not refuse to sell upon learning the faux-customer was playing the role of a nonsmoker. The RSPH then portrayed this as a violation of a largely-nonexistent “code of conduct” and managed to get this non-story all over the press in the UK.

I write that so dismissively — about the organization and the research — for a reason. This was much ado about nothing (as Paul Barnes entitled his post about it, which you can read for more of the basic details; see also Clive Bates’s post in which he dismisses it as a “cheap publicity stunt”). But despite this just being a blast of silly throwaway junk — what we observers of the public health industry simply call “Friday” — blogs and Twitter just lit up about it.

Part of the heightened reaction, as compared to all the other bullshit press releases, can be explained by the strange British obsession with anything containing the word “Royal”. Partially it is because the core failure — the notion that legal retailers of a consumer good should be policing their adult customers to see if they were “the right kind of people” — is something everyone can spot the flaws in; it is not some arcane matter of sample selection bias or confounding. So, for example, we can observe that no retailer ever checks to make sure an adult customer is a smoker before selling her cigarettes. This also makes it particularly easy and fun to satirize. (Incidentally, if a retailer in the USA discriminated in the way RSPH demands, refusing to sell to a potential customer because of his nonsmoker status, there is a good chance it could be successfully sued.)

But I also think the widespread reaction to this silliness, even among those who recognized it was just a publicity stunt, reflects a gut-level feeling that there is something more nefarious here than the surface level reaction implies. Since I try to point out the forest that is hidden among the trees (or should I use the royal “amongst” today), I will point out three deeper issues here. As I often do, I am trying to make a case that just responding to attacks like this at the level of their specific merits, even when the response is that they have no merits at all, is a fail. It is a terrible strategy to treat research like this as if it were fundamentally legitimate but merely flawed, as if it exists in isolation, and as if it were the work of decent ethical people who merely erred. That remains true no matter how effectively you eviscerate it on the surface. Continue reading

LA Times editorial about dishonest public health (ok, not really)

by Carl V Phillips

I have already noted on this page the “welcome to my world” feeling of the press and others complaining about the Trump administration, and its deluge of disinformation and dumb policy proposals, fueled by both unforgivable ignorance and ideological extremism. When newspapers and pundits complain about this, I often find myself thinking, “gee, why don’t you exercise these critical skills when you report on the issues I work on?”

Example:

(For those who might not know why that was fake news and want to learn, see this.)

Today the Los Angeles Times published a scathing critique of Trump and his administration, “Our Dishonest President”, which is getting an enormous amount of attention. Actually, I (and those tweeting that it pulls no punches) probably overstate that a bit; let’s call it “scathing by the standards of insider elites who seldom say anything that is brutally honest, no matter how much it needs to be said.” Anyway, several parts of it struck me as just so “welcome to my world” that I thought maybe I should rewrite it a bit. What I came up with appears below. You might not fully appreciate just how similar to the original it is reading them serially, so I have also posted a marked up version that shows all my edits to the original.

The latter is a bit of a pain to read, of course; if you choose to read just the clean version below, know this: Every single sentence is from the original work that I am criticizing and parodying, pointing out their hypocrisy in not offering similar scrutiny elsewhere (just a little note there for LAT’s copyright enforcers :-). No sentence has been omitted. My version of every sentence maintains the theme of the original (but re-aiming it, of course).


OUR DISHONEST PUBLIC HEALTH ESTABLISHMENT

a parody and critical political commentary by Carl V Phillips, based on a work by THE TIMES EDITORIAL BOARD

It was no secret for years that many in tobacco control and similar branches of U.S. public health are narcissists and demagogues who used fear and dishonesty to appeal to the worst in people. (The Times, however, and in common with every other major news outlet, has allowed this to pass without seriously criticizing it; it never called them unsuited for the job they do, and certainly never said that a particular policy would be a “catastrophe.”)

Still, nothing prepared us for the magnitude of this train wreck. Like millions of other Americans, we clung to a slim hope that public health activists would turn out to be all noise and bluster, or that the people around them in universities and governments would act as a check on their worst instincts, or that they would be sobered and transformed by the awesome responsibilities of influencing policies that have huge effects on people’s lives.

Instead, about two decades into the time of extreme activist public health — and who knows how much time to go before they are stopped — it is increasingly clear that those hopes were misplaced.

Public health social activists have taken dozens of real-life steps that, if they are not reversed, will continue to rip families apart, lower people’s welfare, in many cases actually harm public health, and profoundly weaken the quality of American public education.

Their attempt to ban e-cigarettes for millions of people who had finally quit smoking and, along the way, enact a massive transfer of wealth from tobacco product users to the government and cigarette manufacturers might still be stopped. But they are proceeding with their efforts to grant further arbitrary powers to the government’s regulatory agencies and bloat their budget.

These are immensely dangerous developments which threaten to weaken the moral standing of our government and real public health, imperil freedom and reverse years of slow but steady gains by marginalized or impoverished Americans. But, chilling as they are, these radically wrongheaded policy choices are not, in fact, the most frightening aspect of the ascendance of this brand of “public health”.

What is most worrisome about these people are these people themselves. They are so reckless, so petulant, so full of blind self-regard, so untethered to reality that it is impossible to know where their policies will lead or how much damage they will do to our welfare. Their obsession with fame, wealth and success, determination to vanquish enemies real and imagined, craving for adulation — these traits were, of course, at the very heart of their David-Goliath-myth outsider campaign; indeed, some of them helped them secure the power they have today. But in a real position of power, they are nothing short of disastrous.

Although the activists’ policies are, for the most part, variations on classic real public health positions, they become far more dangerous. Many people, for instance, support restrictions on where you can light-up and educational efforts to encourage healthy eating, but modern public health’s cockamamie tobacco “endgame” fantasies and impracticable campaigns to change human nature turn presumptuous and pushy policy into appalling imposition of an extremist minority view.

In the days ahead, The Times editorial board will look more closely at this, with a special attention to three troubling traits: [Editor’s note: …and so I suspect I might do this again. –CVP]

1. Shocking lack of respect for those fundamental rules and institutions on which our government and scientific community is based. They have repeatedly disparaged and challenged those entities that have threatened their agenda, stoking public distrust of essential institutions in a way that undermines faith in science and democracy. They have questioned the qualifications of scientists and the integrity of their analyses, rather than acknowledging that politics must submit to the rules of nature. They have clashed with their own honest experts, demeaned consumers and questioned the credibility of anyone who does not share their politics. They have lashed out at bloggers and other journalists, declaring them “industry shills,” rather than defending the importance of a critical, independent free press. Their contempt for the rule of law and the norms of government are palpable.

2. Utter lack of regard for truth. Whether it is the easily disprovable boasts about the miraculous effects of smoking bans or the unsubstantiated assertion that soda taxes improve health, they regularly muddy the waters of fact and fiction. It’s difficult to know whether they actually can’t distinguish the real from the unreal — or whether they intentionally conflate the two to befuddle the public, deflect criticism and undermine the very idea of objective truth. Whatever the explanation, they are encouraging Americans to reject facts, to disrespect science, documents, nonpartisanship and the mainstream media — and instead to simply take positions on the basis of ideology and preconceived notions. This is a recipe for a science-free power struggle in which differences grow deeper and rational compromise becomes impossible.

3. Scary willingness to repeat conspiracy theories, misleading memes and crackpot, out-of-the-mainstream ideas. Again, it is not clear whether they believe them or merely use them. But to cling to disproven “alternative” facts; to retweet celebrities with nothing useful to contribute; to make unverifiable or false statements; to buy into discredited conspiracy theories first floated on fringe websites and in deranged University of California blogs — these are all of a piece with antivax or miracle-cure claptrap that, but for some quirk of fate, these same individuals might now be peddling to come to political prominence. It is deeply alarming that a supposedly respectable and science-based movement would lend credibility to ideas that have been rightly rejected by every honest expert who has looked closely.

Where will this end? Will public health moderate their crazier positions as time passes? Or will they provoke a permanently destructive loss of faith in real science and health advocacy? Or, alternately, will the system itself protect us from them as they alienate more and more allies, step on their own message and create chaos at the expense of real public health goals? Already, the approval rating for their policies, among people who really know about them and the alternatives, has been hovering in the mid-30s, a shockingly low level of support for rules that are ostensibly intended to benefit the public. And that was before the new “war on sugar” that is bleeding over from the UK.

Fifteen years ago, it was not yet time to declare a state of “wholesale panic” or to call for blanket “non-cooperation” with the public health activists. Despite plenty of dispiriting signals, that is still our view. The role of the rational opposition is to stand up for the rule of law, the scientific process, and the role of institutions; we should not underestimate the resiliency of a system in which laws are greater than individuals and voters are as powerful as presidents. This nation survived John Harvey Kellogg and Carrie Nation. It survived bloodletting therapy. It survived Prohibition. Most likely, it will survive again.

But if it is to do so, those who oppose the reckless and heartless agenda must make their voices heard. Protesters must raise their banners. Voters must turn out for state and local hearings. Members of Congress must find the political courage to stand up to them. Courts must safeguard individual liberties. State legislators must pass laws to protect their citizens from meddling. All of us who are in the business of holding leaders accountable must redouble our efforts to defend the truth from their cynical assaults.

Science-based policy is not perfect, and it has a great distance to go before it fully achieves its goals. But preserving what works and defending the rules and values on which it depends are a shared responsibility. Everybody has a role to play in this drama.

Editors of Tobacco Control attack blogs: protecting science from cranks, or activism from science?

 

by Roberto A Sussman

[Editor’s Note: This post is the third here on the recent Tobacco Control editorial. The first two, by me, are here and here, and I plan to cap it off with a fourth next week. This guest post was inspired by a comment Dr. Sussman left on one of the previous posts. His outsider perspective, from physics, offers insight that may not be apparent to those of us mired in social science and health debates, and he provides a deeper dive into the stated policies of the “journal” than anyone else has done. –CVP]

In a recent statement of editorial policy the editors of the journal Tobacco Control declared that the journal’s “Rapid Response” section will be henceforth the only legitimate space to express a scientific critique of articles published by the journal. In particular, the editors singled out (unnamed) internet bloggers as illegitimate critics.

This editorial policy reads as an unnecessarily harsh and defensive reaction, as scientific debate in all fields has never been narrowly confined to peer-reviewed journals, and more so in the current age of broad internet usage and social media. Moderated internet sites (such as the Los Alamos National Laboratory LANL arXiv site) have become a regular and very handy communication channel in physical and mathematical sciences and are fully as serious as journals; researchers can upload material not yet published in a journal (under review), or not intended to be published in a journal, to induce an open discussion of fresh (even controversial or unorthodox) ideas without the constraints of the formal review process. Blogs and Facebook pages exist in all disciplines that serve as useful complementary spaces where research issues can be discussed either informally, or with varying degrees of rigor, mostly involving scientists and graduate students, but also educated non-scientists that may be interested. Besides all these points, publication in peer-reviewed journals is not a guarantee of solid or good quality research, as many peer-reviewed articles in “official” journals report false, methodologically inconsistent, or dishonest results.

However, some forms of “unofficial” critique are neither valuable nor useful. Scientists make an effort to avoid and exclude cranks and crackpots voicing (mostly in social media) all sorts of critical opinions on various scientific topics (especially politically controversial ones). Typically, these characters cleverly juggle (out of context) technical terminology to produce theoretical constructions that may fool lay persons, but are easily seen as incoherent nonsense by any professional researcher (or even a competent undergraduate student). As a common feature they deflect criticism by invoking conspiracies directed by some “scientific establishment” bent on silencing them. As a professional scientist (specialized in theoretical astrophysics and cosmology), I can recall very frustrating experiences involving encounters with this type of non-scientific critics. I have also engaged creationists and “UFO-logists” in front of non-scientific audiences, and have learned the hard way that debating scientific issues requires proper rules of engagement and proper spaces (which does not exclude blogs). Without the appropriate environment and moderation, scientific arguments (even if expressed in non-technical manner) cannot compete with “punchlines” or quick soundbites and analogies.

Medical sciences are not immune to science trolling, as can be witnessed by the efforts of groups like ACSH (American Council of Science and Health) to expose all sorts of doubtful health claims promoted by fad peddlers and cranks writing in social media. This type of science trolling about medical issues has more direct and significant social impact and consequences than in physics. Statements promoting well-being, or warning against terrible ills that would follow automatically from some diet or substance consumption or from adopting a new habit, have an immediate practical impact for those accepting them as true or plausible. We have fallacies potentially producing immediate behavior patterns. By contrast, a cranky statement from physics trolling, such as “a black hole emerging from the Large Hadron Collider (LHC) may cause a great planetary catastrophe”, sounds distant and abstract even to those understanding or believing it. After all, whether one believes it or not, there is no practical course of action to prevent the whole earth from being carved out by a massive black hole, but for those believing that diet X cures cancer, adopting and promoting this diet is concrete and doable. The apparent fallaciousness of such a claim (i.e. diet X does not appear to cure cancer) can only be verified by looking at data-based statistics after decades of observation. It is very unlikely that the lay public will follow up the long-term epidemiological studies. As a consequence, large sections of the public may keep believing fallacious health claims (especially if propagated by wide media coverage) and those propagating it are very likely able to get away with it (especially if well connected politically). On the other hand, cranky predictions from physics trolling tend to be rapidly disproven and forgotten: no planetary catastrophe happened when the LHC started functioning.

While the disinformation propagated by science trolling and the peskiness of some social media crackpots are very disturbing, these phenomena can not serve as reasons to decree a strict enclosure of all scientific discourse and debate within the walls of academic journals. Even if we assume that the editors of Tobacco Control (and other scientists) could be legitimately annoyed by cranky “outsider” critics writing in social media and blogs, their editorial is an evident over-reaction. Normally, scientific journal editors would not bother expressing a forceful editorial policy based on declaring war on this type of science trolling. The latter is simply and unceremoniously filtered out of the scientific debate without constraining the discussion and critique to strict officialdom.

The key to understanding what lies behind this overreaction is to ask: Are the bloggers that annoy the Tobacco Control editors part of the legion of social media cranks that pester scientists in various disciplines? To answer this question we need to examine the material posted by these bloggers. If this material is worthless inconsistent nonsense disguised as technical criticism, then the Tobacco Control editors may have a point (even if they exaggerate). But if this material is valuable and methodologically sound criticism, then the defensive reaction from the editors would likely follow from their inability to disprove them within the rules of scientific debate. To address these questions we also need to understand the specifics of the Tobacco Control journal and the research it publishes, as well as the motivations and backgrounds of the critical bloggers and the material they post.

To the external eye, Tobacco Control looks like an ordinary scientific journal: it has an editorial board of professors; its contributors are PhD’s and other credentialed researchers working (mostly) in academic or government environments, receiving public and industry (pharmaceutical) grants; it undertakes a formal peer-reviewing process; it includes a rapid comments section; etc. This looks like any journal in other disciplines.

However, this resemblance is a deceptive illusion based on common external markings and trappings. Tobacco Control is not a proper scientific research journal that serves a real academic community. It is a journal for a loose alliance of academics and regulators (mostly, physicians, lawyers and other non-scientists) whose main task is to advocate and promote a specific tobacco regulation policy with the aim of eradicating tobacco and nicotine usage.

The advancement of the policy strategy is paramount for the journal and is not open to debate, with the “science” part and related technical aspects in the research it publishes being strictly confined to tactical issues subservient to their potential utility in this advocacy. This characterization requires no secret knowledge. A glance at the recommendations to prospective authors of articles to be published by the journal clearly and openly states its research orientation and strict priority:

The principal concern of Tobacco Control is to provide a forum for research, analysis, commentary, and debate on policies, programmes, and strategies that are likely to further the objectives of a comprehensive tobacco control policy. In papers submitted for review the introduction should indicate why the research reported or issues discussed are important in terms of controlling tobacco use, and the discussion section should include an analysis of how the research reported contributes to tobacco control objectives.

In fact, prospective authors are explicitly discouraged from submitting articles which may contain potentially valuable scientific material but have no direct effect on the advancement of the core policy strategy. From their list of papers they are not interested in:

Papers that show the authors have never opened Tobacco Control and do not understand its primary focus on tobacco control rather than on tobacco and its use and health consequences. We are interested in such papers, but only if their authors address the implications of their findings for tobacco control.

While it may be argued that most research is (or could be) connected to some type of social activism that could have some public policy implications or to other type of social or political “extra-science” concerns, no journal I know of in other disciplines (not even in the politically contentious climate change issue) functions with such a strict focus and dependency on advocacy and a specific political agenda. This renders Tobacco Control primarily an activist broadside that acts as a travesty of a science journal.

To illustrate how the science part of Tobacco Control is just a skin over a particular advocacy position, we need to examine what lies beneath this skin. I elaborate below on this issue.

Practically all the articles published in Tobacco Control present research that fully supports the premises justifying the regulatory agenda that defines the journal. This lack of disagreement on core technical issues signals a built-in monolithic alignment of the sort one expects to find in echo chambers of political activists or dogmatic sects, but which is suspicious and uncommon in science, where dissent on core issues occurs and is voiced (I do not mean crackpot dissent, of course, but dissent within the rules and bounds of scientific activity).

Perhaps editors or contributors of Tobacco Control might argue that this unanimity is justified because the “hard science” behind their strategic policy “has been settled”, so that disputing the policy would amount to a “flat earth attitude”, a questioning of well-established, rock-solid scientific research. This is a clear fallacy: there is no factual basis for the assumption that health science has fully resolved all tobacco-related issues and become cast in stone. There is strong evidence (epidemiological and physiological) of high health risks and hazards from primary cigarette smoking, but many open problems remain to be researched, and the evidence is weak or even contradictory (i.e. the science is far from “settled”) on other related issues, such as the health risks from environmental tobacco smoke (ETS) or from other tobacco and nicotine delivery products (smokeless tobacco or electronic cigarettes). These issues, especially harm from ETS exposure, remain controversial and must therefore be open to debate. A rigid set of policy recommendations on them has a questionable scientific basis. The unanimity on core issues proclaimed by the Tobacco Control journal resembles “toeing the party line” of a political or ideological agenda far more than it resembles the endorsement of science.

Another sign of how skin-deep the science in Tobacco Control runs is the technical sloppiness (and in some cases outright fatal methodological flaws) of many articles published in the journal. Some might think that one must be a trained health professional to properly appreciate and evaluate the technical aspects of medical research on tobacco that could justify a regulatory policy. This is not so. While expert analysis of clinical issues, diagnosis, and treatment might require medical or health science training, most articles published in Tobacco Control rely on epidemiological research that can be well understood (at its core) by any professional with decent training in statistics and some knowledge of social science methodology. Likewise, professionals with a decent knowledge of the physics and chemistry of gases and aerosols can evaluate issues related to putative harms from ETS and e-cigarette vapor.

There are many examples of methodologically deficient articles published by Tobacco Control. In particular, I cite two recently published studies that contain fatal flaws: (i) a 2016 study claiming to have detected an 11% decrease in heart attacks in Sao Paulo, Brazil, immediately after the enactment of a city-wide smoking ban in bars and restaurants, and (ii) a study claiming that use of e-cigarettes is a “gateway” to smoking among high school students in the USA. In both cases the data were handled very sloppily and the results blatantly contradict the available evidence. Nevertheless, they were published, which implies either (a) that the editors and peer reviewers were utterly incompetent, or (b) that technical quality and methodological consistency are secondary concerns when the editors deem a prospective article to make a significant contribution to the journal's main concern: the regulatory agenda. Of course, (a) and (b) are not mutually exclusive.

Articles on tobacco/nicotine issues with similar themes and similarly fatal methodological flaws have appeared in other journals. The Sao Paulo study is a sort of sequel to the famous “Helena miracle” study published in the flagship BMJ (which has the same publisher as Tobacco Control) and widely criticised and debunked (example), whereas the study on teenage vaping fits the pattern of another study published in Lancet Respiratory Medicine, which was also heavily criticised (example). Both of these studies were co-authored by a well-known anti-tobacco activist and prolific contributor to research on ETS and tobacco issues in medical journals, Prof Stanton Glantz (the Truth Initiative Distinguished Professor at the Center for Tobacco Control Research and Education at UCSF). These patterns illustrate that treating the advancement of the regulatory policy as a paramount concern, one that supersedes even quality control of methodology, is not confined to the Tobacco Control journal; it extends to the whole cabal formed by the vast majority of public health researchers publishing journal articles on tobacco issues with implications for regulatory policy.

It can be argued that technical flaws, such as sloppy handling of data and statistical hodgepodge deployed to obtain outcomes favouring funders' preferred conclusions, are not confined to Tobacco Control and similar journals involved in tobacco/nicotine research, but are common drawbacks in other disciplines as well (especially in various branches of the health sciences). However, the credibility of scientific research is undermined even further when, on top of these drawbacks, journals such as Tobacco Control gauge and evaluate research results by their utility for advocating a specific regulatory policy. Since that policy is endorsed and implemented globally at the highest bureaucratic and government levels, the authors of such flawed studies are essentially free from scrutiny, and are thus more than willing to publish any research that favours their advocacy, even if it contains extremely misleading and false results.

Articles exhibiting this scandalous level of faulty methodology would never be published in my research area. This does not mean that erroneous or false (or even fraudulent) results are never published in physics journals. But once proven wrong or debunked, the authors and journals acknowledge the faults. Two years ago the BICEP2 observations seemed to have found a weak signal providing indirect evidence of tensor modes associated with gravitational waves that could have been produced during cosmic inflation. If verified, this signal would have been the first empirical evidence for the inflationary hypothesis and a strong indication of the existence of primordial gravitational waves (further corroborating General Relativity). However, it turned out that the handling of the BICEP2 data had been sloppy: the measurement was contaminated by Milky Way dust, whose foreground emission completely buried the claimed weak signal. In contrast with medical journals that refuse to withdraw health claims on tobacco/nicotine issues even after they have been debunked, the BICEP2 claim was immediately withdrawn by all the researchers and journals involved.

Now, what about the bloggers that the editors of Tobacco Control wish to excommunicate? Are they science trolls? The social media blogs that criticize articles appearing in the Tobacco Control journal (and similar journals) are quite diverse, with perhaps their single common feature being their opposition to the type of tobacco and e-cigarette regulation that is aggressively advocated by these journals.

Some of the blogs represent the vaping community and some claim to speak for smokers and vapers. Others are more broadly libertarian. Some argue the case for the tobacco harm reduction (THR) approach, even actively promoting vaping or smokeless tobacco as substitutes for cigarette smoking, while others adopt a pragmatic approach that supports THR without campaigning against combustible tobacco. Some of these blogs are scholarly defenders of science. Others are not scholarly, but aim to give a voice to a community of smokers, smokeless tobacco users, and vapers who actually enjoy using the products and feel personally affected by the social stigma produced by the intrusive bans that follow from the policy recommendations.

These bloggers, as well as most readers commenting on their posts, may be critical, but they are not in denial of the health risks of smoking, particularly cigarette smoking. As far as I can tell, very few of the bloggers and readers advocate a return to the old days when smoking was almost unregulated and allowed everywhere. Instead, they express a generalised desire for a more humane regulation of tobacco smoking (and now of vaping): the right of nonsmokers to smoke-free environments should be respected, but smokers (and vapers) should also be able to enjoy public indoor spaces where they can smoke or vape without being shamed and vilified by “denormalization” policies. Bloggers and readers comment on how such policies are promoted by a global conjunction of increasingly authoritarian public health lobbies and charities whose aims are perceived to lie far from genuine public health concern and to be more about preserving their bureaucratic power (the “gravy train”), with many of them having intimate financial ties to the pharmaceutical industry.

Some of the blogs are quite scholarly (some are run by experienced scientists) and provide, together with useful verifiable information, solid and reasoned criticism of the loose methodology prevalent in the research published in Tobacco Control and other health journals. In fact, all the methodological flaws I mentioned before (the faulty meta-analyses and statistical hodgepodge, the “Helena miracle” claims, the mishandling of data, the simplistic “addiction” theory, the dismissal of previous results that do not align with the agenda) have been extensively and rigorously discussed in the pages of these scholarly blogs. While most blogs (even the scholarly ones) avoid the dry, sterile, jargon-laden style of published journals in favour of a more colloquial but well-articulated style suited to a broad audience, much of the material appearing in the scholarly blogs could easily meet (after some editing and stylistic changes) the methodological standard of quality that merits publication in a scientific journal.

These scholarly bloggers provide a fresh and healthy counterbalance to the “official” tobacco/nicotine research published in academic journals, which is excessively constrained by global public health politics and by the vested interests of the pharmaceutical industry. In particular, they promote varied proposals for a new regulatory paradigm based on THR to replace the policies that try to enforce an “abstinence only” approach. While the bloggers are certainly not beyond criticism (and some may tend to become too self-centered and defensive), they are absolutely not (not even remotely) comparable to crackpots or science trolls. In fact, these bloggers provide the necessary and refreshing debate and exchange of ideas that could prevent the science on tobacco/nicotine issues from becoming practically indistinguishable from quasi-religious dogma.

Controversy on core issues and the challenging of dominant paradigms occur naturally in every scientific discipline; there is no reason why this should not happen in public health science. In fact, part of the community of public health scientists has resonated with the criticism expressed by the scholarly bloggers, agreeing with them (with varying degrees of consistency and conviction) on various proposals for shifting regulatory policy towards a THR approach. To claim, as many official tobacco scientists do, that this whole spectrum of critical voices consists of mere fronts for the maligned tobacco industry is a ridiculous libel that is easily disproved.

It is clear beyond doubt that the harsh defensive reaction of the Tobacco Control editors stems from their inability to acknowledge the serious technical errors that the bloggers they would like to excommunicate have spotted. These editors are exploiting the fact that, externally, their niche (a journal whose editors and contributors are credentialed academics) resembles that of other scientific journals, while the bloggers (even those posting valuable material) operate outside these “official” channels. Their hope is to secure, by association, the professional authority of journals in other sciences and thereby deflect the bloggers' criticism.

The tactic of the Tobacco Control editors is therefore evident: to identify all their critics, especially those writing in scholarly blogs, with the social media crackpots who besiege scientists in other disciplines. Their editorial is an attempt to exploit their external resemblance to a real research journal serving a real academic community for this purpose. Their target audiences are: first, the media, the politicians, and the medical community who can implement the policies they advocate; second, the public health authorities and other academic communities (which would identify with them because of the superficial resemblance); and third, laypeople, who are unaware of the inner workings of scientific activity and simply assume that somebody like Prof Stanton Glantz, a co-author of the fraudulent “Helena miracle” (to use a well-known example), is as good a scientist as any other.

It goes without saying that the dominant majority of public health researchers involved in tobacco/nicotine research act with gross dishonesty when they paint themselves as bona fide scientists besieged by social media cranks or “Big Tobacco” fronts. Neither Prof Glantz nor any other prominent figure in this cabal has ever disavowed the most extreme pieces of tobacco junk science published in journals: for example, the claim that minutes of outdoor exposure to ETS produce coronary disease, or the claim that “third hand smoke” (health harms somehow resulting from tobacco smoke residue on rugs and walls where someone once smoked) exists. The claims in such pieces of published third-rate junk science are at the same level of science trolling as the writings of cranks on social media. There is little difference between the “third hand smoke” claim, which treats tobacco smoke as a quasi-magical substance lethal even in extremely minute doses, and quasi-witchcraft statements from a social media freak naturist sect announcing that wearing a pyramidal magnetic amulet around the neck protects against cancer. Yet the naturist sects do not claim the patronage of science, whereas this type of officially published ultra-junk science does. For this reason, the latter is socially far more harmful than the former.

The identification of Tobacco Control critics with crackpots besieging scientists may backfire, as it is easily shown to be false simply by reading through the pages of the scholarly blogs and comparing them with the pages of the journal they criticize. Not even the non-scholarly blogs and their readers can be tagged as trolls, since they generally avoid the extreme abuse seen among social media trolls. In fact, anybody who has tried to debate extreme or neurotic anti-smokers (whether laypersons or physicians) rapidly discovers that expressing any doubt or nuance about the usual soundbites, such as “second hand smoking kills” or “you have no right to force your filthy habit on me”, or the various forms of “protect the children” demagogy, is met with ad hominem attacks, angry denials, and abusive language. A large minority of anti-smokers in all walks of life are deeply prejudiced individuals whose attitudes towards smokers are no different from the attitudes of racists and homophobes towards their hate targets. In fact, anonymous anti-smokers on social media exhibit all the unpleasant features of internet trolls and crackpots: dogmatic belief that they possess absolute truth, combined with the invocation of conspiracies (the tobacco industry luring “kids” into nicotine addiction). Unfortunately, some academics who publish on tobacco issues in official journals espouse the same cranky, troll-level ideas, just expressed in polite technical terms.

Evidently, the editors of the Tobacco Control journal are trying to mobilise the medical-political bureaucracy and the charities that share their anti-tobacco/anti-nicotine advocacy. The aim of their recent editorial is to pin the crackpot label on all their critics (especially the scholarly blogs), since the old “tobacco industry mole” label is no longer credible. They may succeed, but the label is nevertheless deceptive. Sooner or later most people will realise this and admit that “the emperor has no clothes”.