Category Archives: truths

discussions of how to best present the truth

Sunday Science Lesson: Debunking the claim that only 16,000 smokers switched to vaping (England, 2014)

by Carl V Phillips

When this journal letter (i.e., short paper), “Estimating the population impact of e-cigarettes on smoking cessation in England” by Robert West, Lion Shahab, and Jamie Brown came out last year, most of us said “wait, wot?” The authors estimated that in 2014, about 16,000 English smokers became ex-smokers because of e-cigarettes (a secondary analysis offered 22,000 as an alternative estimate). But that year saw an increase of about 160,000 ex-smokers who were vapers in the UK (the year-over-year increase for 2015 versus 2014), according to official statistics. In addition, there were about 170,000 more ex-smokers who identified as former vapers. Since the latter group is missing from the count of ex-smokers who are vapers in 2015, they need to be added back. So the year-over-year increase in English ever-vapers among ex-smokers appears to be nearly 200,000, after roughly adjusting for the different populations (England is 80% of the UK population). Thus West et al. are claiming, in effect, that the vast majority of people who went from smoking to vaping did not quit smoking because of vaping.

My calculation is rough, and for several reasons it may be a bit high (e.g., the measured points in 2015 and 2014 demarcate a year that falls slightly later in calendar time than 2014 itself, and the rate of vaping initiation was increasing over time). But we are still talking about well over 100,000 new ex-smoker vapers. Probably closer to 200,000. So this would mean that about 90% of new ex-smoker vapers either would have quit smoking that year even without vaping, had quit tobacco entirely and only later took up vaping, or are not “real quitters” (i.e., they were destined to start smoking again before they would “count” as having quit, which is not well defined, but the authors seem to use one year as the cutoff). This seems rather implausible, to say the least.
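
To make the arithmetic transparent, here is a minimal sketch of that reality check (the inputs are the approximate official statistics quoted above; the final rounding down reflects the caveats just discussed):

```python
# Rough reality-check arithmetic. All inputs are the approximate statistics
# quoted above; the final figure is rounded down per the caveats discussed.

uk_increase_exsmoker_vapers = 160_000    # YoY increase, ex-smokers who vape
uk_increase_exsmoker_exvapers = 170_000  # YoY increase, ex-smokers who formerly vaped
england_share = 0.80                     # England is ~80% of the UK population

england_new_exsmoker_ever_vapers = (
    uk_increase_exsmoker_vapers + uk_increase_exsmoker_exvapers
) * england_share
print(round(england_new_exsmoker_ever_vapers))  # ~264,000 before discounts

# After the discounts above (survey timing, rising initiation rate), call it
# roughly 200,000. West et al.'s 16,000 then implies ~90% "would have quit anyway".
reality_check = 200_000
west_estimate = 16_000
print(1 - west_estimate / reality_check)        # 0.92
```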

This is an extraordinary claim on its face given what we know about the advantages of quitting by switching, and more so given that more detailed surveys of vapers (example) show almost all respondents believe they would still be smoking had they not found e-cigarettes. It must be noted that most respondents to those surveys are self-selected vaping enthusiasts who differ from the average new vaper, and that a few of them might be wrong and would have quit anyway. But the disconnect is still far too great for West’s weak analysis (really, assumptions) to come close to explaining.

I never bothered to comment on the paper at the time it came out because the methodology was so weak and the result so implausible that I did not think anyone would take it seriously. But the tobacco warriors seldom meet a bit of junk science they do not like. In this case, Clive Bates asked me to examine the claim (and contributed some suggestions on this analysis and post) because some tobacco controllers have taken to saying “e-cigarettes caused only 16,000 people to quit smoking in England! So we should just prohibit people from using them!”

The proper responses to this absurd assessment and demand, in order of importance, are:

  1. It would not matter if they caused no one to quit smoking. It is a violation of the most fundamental human rights to use police powers to prohibit people from vaping if they want to. People have a right to decide what to do with their bodies. Moreover, in this particular case, you cannot even make the usual drug war claims that users of the product are driven out of their minds and do not understand the risks and the horrible path they will be drawn down: Vaping is approximately harmless, most people overestimate the risks, and it leads to no horrible path. It is outlandish — frankly, evil — to presume unto oneself the authority to deny people this choice.
  2. But even if you do not care about human rights and only care about health outcomes or whatever “public health” people claim to care about, causing a “mere” 16,000 English smokers to quit, annually, is quite the accomplishment. There is no plausible basis for claiming any recent tobacco control policy has done as much. Since there is no measurable downside, this is still a positive. Also, the rate of switching probably could be increased further with sensible policies and truthful communication of relative risks.
  3. The rough back-of-the-envelope approach used in the paper could never provide a precise point estimate even if the inputs were optimally chosen. But the inputs were not well chosen. The analysis included errors that led to a clear underestimate. When a back-of-the-envelope result contradicts a reality check, we should assume that reality got it right.

So I am taking up here what is really a tertiary point.

Back of the envelope calculations

West et al. carried out a back-of-the-envelope calculation, a simple calculation based on convenient approximations that is intended to produce a quick rough estimate. It happens to have glaring errors, but I will come back to those. Crude back-of-the-envelope calculations have real value in policy analysis. I taught students this for years. In my experience, when there is a “debate” about the comparative costs and benefits of a policy proposal, at least half the time a quick simple calculation shows that one is greater than the other by an order of magnitude. The simple estimate can illustrate that the debate is purely a result of hidden agendas or profound ignorance, and also eliminate the waste of unnecessary efforts to make precise calculations.

When doing such an analysis, it is ideal if you get the same result even if you make every possible error as “conservative” as is plausible (i.e., in the direction that favors the losing side of the comparison). West’s analysis would thus be useful if it were presented as follows: “Some people suggest that the health cost from vaping experienced by new vapers outweighs the reduction in the health cost from smoking cessation that vaping causes. Even if we assume that vaping is 3% as harmful as smoking, the total health risk of the additional vapers (the annual increase) would be on the order of the risk equivalent of about 5000 smokers. Our extremely conservative calculation yields on the order of 20,000 smokers quitting as a result of vaping. So even with extreme assumptions, the net health effect is clearly positive.”
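
To illustrate, here is a sketch of that hypothetical conservative framing. Note that the ~167,000 annual increase in vapers is my back-solved assumption (5,000 / 0.03); the framing above supplies only the endpoints:

```python
# Sketch of the hypothetical "extremely conservative" comparison described above.

harm_ratio = 0.03            # assume vaping is 3% as harmful as smoking
annual_new_vapers = 167_000  # assumed annual increase in vapers (back-solved)

smoker_equivalent_risk = annual_new_vapers * harm_ratio
print(round(smoker_equivalent_risk))  # ~5,000 smokers' worth of added risk

conservative_quitters = 20_000        # the deliberately low quitting estimate
print(conservative_quitters > smoker_equivalent_risk)  # True: net effect positive
```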

But the authors did not claim to be offering an extremely conservative underestimate for purposes of doing such a calculation. They implicitly claimed to be providing a viable point estimate. And that requires a more robust analysis rather than rough-cuts, and best point estimates rather than worst-case scenarios. It also requires a reality check about what would have to be true if the ultimate estimate were true, namely that almost everyone who switched from smoking to vaping did not stop smoking because of vaping.

West’s estimation based on self-identified quit attempts

The crux of their calculation is the following: Their surveys estimate that 900,000 smokers self-identify as having attempted to quit smoking using e-cigarettes (please read this and similar statistics with an implicit “in this population, during this period” and I will stop interjecting it). They then assume that 2.5% of them actually did quit smoking because of e-cigarettes.

Where does the 2.5% come from? It is cited to, and seems to be based mainly on, the results of the clinical trials where some smokers were assigned to try a particular regimen of e-cigarettes; the 2.5% is an estimate of the rate at which they quit smoking in excess of the rate among those assigned to a different protocol.

Before addressing the problems with using trial results, consider the second paper they cite as a basis for the 2.5% figure, one by their own research group. How they got from that paper’s results to 2.5% is unfathomable. That paper was a retrospective study of people who had tried to quit smoking using various methods and found that those reporting using e-cigarettes were successful about 20% of the time, which beat out the two alternatives (unaided and NRT) by 5 and 10 percentage points. If they had used ~20% instead of ~2.5% their final result would have been up in the range that would have passed the reality check. So what were they thinking?

I cannot be certain, but am pretty sure. It appears they only looked at differences in cessation rates and not the absolute rates, so the 5 or 10 rather than the full 20. Several things they wrote make it clear this is how they were thinking. This is one of several fatal flaws in their analysis. There are two main pathways via which e-cigarettes can cause someone to quit smoking (which means it would not have happened without them): E-cigarette use can cause a quit attempt to be successful when that same quit attempt would not have otherwise been successful, or it can cause a quit attempt (ultimately successful) that would not have otherwise happened. West et al. are pretty clearly assuming that the second of these never happens. I am guessing that the authors did not even understand they were making a huge — and clearly incorrect — assumption here.

Caused quit attempts account for a large portion of the cases in which e-cigarettes caused smoking cessation. Indeed, in my CASAA survey of vapers (not representative of all vapers, but a starting point), 11% of the respondents were “accidental quitters”, smokers who were not even actively pursuing smoking cessation, but who tried e-cigarettes and were so enamoured that they switched anyway. Add to these the smokers who had vague intentions of quitting but only made a concerted effort thanks to e-cigarettes, and probably about half of all quit attempts using e-cigarettes do not replace a quit attempt using another method. So if half the 900,000 made the quit attempt because of e-cigarettes and 20% succeeded, we have, right there, a number that is consistent with the reality check I proposed.

Of course they did not use that 20%, and it does seem too high. What they did was assume that 5% would have succeeded in an unaided quit attempt without e-cigarettes — and all the same people would have made that attempt — and so 7.5% (5%+2.5%) actually succeeded when using e-cigarettes. But if half never would have made that attempt then a full 7.5% of them should be counted as being caused to quit by e-cigarettes, which more than doubles the final result (“more than” because their final subtraction, below, would not double but should actually be reduced).
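
Putting numbers on this, here is a rough reconstruction of the implied arithmetic and the correction (the 50% share of caused quit attempts is the rough figure suggested above, not a number from the paper):

```python
# Reconstruction of the paper's core arithmetic and the correction argued above.

attempters = 900_000        # smokers who tried to quit using e-cigarettes
success_with_ecigs = 0.075  # 5% assumed unaided success + 2.5% increment

# West et al.: credit e-cigarettes only with the 2.5% increment.
print(attempters * 0.025)   # 22,500 -- matches their ~22,000 pre-debit figure

# Correction: attempts that happened only because of e-cigarettes should be
# credited at the full 7.5% success rate.
caused_attempt_share = 0.5  # rough figure suggested above
corrected = (
    attempters * caused_attempt_share * success_with_ecigs  # caused attempts
    + attempters * (1 - caused_attempt_share) * 0.025       # replaced attempts
)
print(corrected)  # 45,000 -- double the pre-debit figure; the final result
                  # more than doubles once the debit is also reduced
```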

As for why they did not use that 20%, I suspect (though they do not say) that when looking at the numbers from that paper, West et al. focused not only on the differences (the error I just discussed) but on the “adjusted” rates of how much more effective e-cigarettes were than the other methods, which were considerably lower than the numbers I quoted from the paper above. This too is an error. Public health researchers think of “adjusting” (attempting to control for confounding) as something you just do, a magical ritual that always makes your result better. This perception is false for many reasons, but a particularly glaring one in this case: The adjusted number is basically the measure of how helpful e-cigarettes would have been, on average, if those who tried to switch to them had the same demographics as smokers using other cessation methods. Smokers who try to switch to e-cigarettes have demographics that predict they are more likely to succeed in switching than the average smoker. Of course they do! People know themselves (a fact that seems to elude public health researchers). The ones who tried switching were who they were; they were not a random cross-section of smokers. So it seems that West et al. effectively said “pretend that instead of self-selecting for greater average success, those who tried to switch were chosen at random, and instead of using the success rate for the people who actually made that choice, we will use instead the number that would have been true if they were random.”

[Caveat: The attempt to control for confounding could also correct for the switchers having characteristics that make them more likely to succeed in quitting no matter what method they tried. So some of the “adjustment” is valid — but only for those who would have tried anyway — but much of it is not.]

Clinical trials

That last point relates closely to the other “evidence” that was cited as a basis for that 2.5% figure, and appears to have dominated it: the clinical trials.

Clinical trials of smoking cessation are useless for measuring real-world effects of particular strategies when they are chosen by free-living people. At best they measure the effects of clinical interventions. But in this case, these rigid protocols are not even a good measure of the effect of real-world clinical interventions in which smoking cessation counselors try to most effectively promote e-cigarettes by meeting people where they are and making adjustments for each individual. I have previously discussed this extensively.

A common criticism is that the trials directed subjects toward relatively low-quality e-cigarettes. That is one problem. More important, the trials did not mimic the social support that would come from, say, a friend who quit smoking using e-cigarettes and is offering advice and guidance. The inflexibility of trials does not resemble the real-world process of trying, learning, improving, asking, and optimizing that real-world decisions entail. Clinical trials are designed to measure biological effects (and even then they have problems), not complex consumer choices.

But it is actually even worse than that. A common failing in epidemiology is not having a clue about what survey respondents really mean when they answer questions. There is no validation step in surveys where pilot subjects are given an open-ended debriefing of how they interpreted a question and what they really meant by their answer. (I always do that with my surveys, but I am rather unusual.) So consider what a negative response to “tried to quit smoking with e-cigarettes” really means. If a friend shoved an e-cigarette into a smoker’s hand and said “you should try this”, but she refused to even try it, she would undoubtedly not say she tried to quit smoking with e-cigarettes. But in a clinical trial, if that were her assignment, she would be counted among those who used e-cigarettes to try quitting, thus pulling down the success rate.

If she tried the e-cigarette that was thrust at her, but did not find it promising, chances are that in a survey she would not say she tried quitting using e-cigarettes. (She might, but given the lack of any reporting about piloting and validation of these survey instruments, we can only guess how likely that is.) If she passed that first hurdle, of not rejecting e-cigarettes straightaway, but used them sometimes for a few days or weeks, she might or might not say she tried quitting using e-cigarettes. But if she actually quit using e-cigarettes, she would undoubtedly count herself among those who tried to quit using e-cigarettes. I trust you see the problem.

It is the same problem that is common in epidemiology when you read, say, that 20% of the people who got a particular infection died from it. This usually means that 20% of the people who got sick enough from it to present for medical care and get diagnosed died, but countless others had mild or even asymptomatic infections. Everyone in the numerator (died in this case, quit in the case of e-cigarettes) is counted but an unknown and probably very large portion of those in the denominator (got the infection, were encouraged to try an e-cigarette) are not. Clinical trial results are (at best) analogous to the percentage you would get if you did antibody tests in the population to really identify who got the infection. This turns out to be the right way to measure the percentage of infected who die. But if you then applied that percentage to the portion who presented for medical treatment, you would be underestimating the number of them who would die. That is basically what West et al. did. Their 900,000 are those for whom e-cigarettes seemed promising enough to be worth seriously trying as an alternative, but they applied a rate of success that was (again, at best) a measure of the effect on everyone, including those who did not consider them promising enough to try.
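
A toy version of that denominator mismatch, with entirely made-up numbers, shows the mechanics:

```python
# Hypothetical numbers, purely to illustrate the denominator mismatch.

infected = 10_000   # everyone infected (analog: everyone urged to try e-cigarettes)
presented = 1_000   # those sick enough to present (analog: serious quit attempts)
deaths = 200        # all occurring among those who presented

infection_fatality = deaths / infected  # 2%: the "everyone exposed" rate
case_fatality = deaths / presented      # 20%: the rate among serious cases

# Applying the population-wide rate to the self-selected group understates
# the outcome in that group by a factor of ten:
print(presented * infection_fatality)   # 20 predicted deaths
print(deaths)                           # 200 actual deaths
```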

This would be a fatal flaw in West’s approach even if the trials represented optimal e-cigarette interventions, providing many options among optimal products and the hand-holding that would be offered by a knowledgeable friend, vape shop, or genuine smoking cessation counseling effort. They did not, and so underestimated even what they might have been able to measure.

Final step

As a final step, West et al.’s approach debits e-cigarettes with an estimated decrease in the use of other smoking cessation methods caused by those who tried e-cigarettes instead. These are the methods that are believed to further increase the cessation rate above the unaided quitting that West debited across the board (the major error discussed above). We can set aside deeper points about whether estimates of the effects of these methods, created almost entirely by people whose careers are devoted to encouraging these methods, are worth anything. West et al. assume that those methods would have had average effectiveness had they been tried by those who instead chose vaping. They also still assume that every switching attempt would have been replaced by another quit attempt in the absence of e-cigarettes, as discussed above. This lowers their estimate from 22,000 to 16,000. But a large portion of smokers who quit using e-cigarettes do so after trying many or all of those other methods, often repeatedly. Assuming those methods would have often miraculously been successful if tried one more time makes little sense.

As a related point that further illustrates the problems with their previous steps, recall that the 2.5% is their smoking cessation rate in excess of that of those who tried unaided quitting or some equivalently effective protocol. But it seems very likely that the average smoker who tries to switch to e-cigarettes has already had worse success with that other protocol than has the average volunteer for a cessation trial. This is the “I tried everything else, but then I discovered vaping” story. I am aware of no good estimate for this disparity, but if the average smoker who tried to switch were merely 1 percentage point less likely than average to succeed with the other protocol (e.g., because she already knew that it did not work for her), then the multiplier should have been 3.5% (7.5%-4% rather than 7.5%-5%). This is trivial compared to the error of using the incredibly low estimated success rate suggested by the trials in the first place, of course, but that little difference alone would have increased West’s estimate by 40%. This illustrates just how unstable and dependent on hidden assumptions that estimate is, even apart from the major errors.
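
A quick sketch of that sensitivity:

```python
# Shave one percentage point off the assumed counterfactual success rate and
# the headline estimate moves by 40%.

attempters = 900_000
success_with_ecigs = 0.075

print(attempters * (success_with_ecigs - 0.05))  # 22,500 (their assumption)
print(attempters * (success_with_ecigs - 0.04))  # 31,500 (+40%)
```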

Returning to the reality check

But lest we get lost in the details, the crux is still that West implicitly concluded that the vast majority of those who switched from smoking to vaping did not quit smoking because of vaping. The authors never reflect on how that could possibly be the case. They do, however, offer an alternative analysis, in what are effectively the footnotes, that gives the illusion of responding to this problem without actually doing so. They write:

The figure of approximately 16,000–22,000 is much lower than the population estimates of e-cigarette users who have stopped smoking (approximately 560,000 in England at the last count, according to the Smoking Toolkit Study). However, the reason for this can be understood from the following….

What follows is even weirder than their main analysis.

West’s “alternative” analysis

They actually start with that 560,000. That is inexplicable since it is possible to estimate the year-over-year change in 2014, as I did, rather than working with the cumulative figure. The 560,000 turns out to be well under half what you get if you add the current vapers and ex-vapers among ex-smokers from the statistics I cite above. So their number already incorporates some unexplained discounting from what appears to be the cumulative number. But since I am baffled by this disconnect, I will just leave this sitting here and proceed to look at what they did with that number.

As far as I can understand from their rather confusing description of their methods here, their first step is to eliminate those who were already vaping by 2014, and thus did not switch in 2014. That makes sense, though it would have been easier to just start with that. When they do this, they leave themselves with 308,000. So they started with something much lower than what you get from the statistics I looked at, and ended up with something that is half-again higher than the rough estimate from those statistics. Um, ok — just going to leave that here too. But the higher starting figure makes it even more difficult for them to explain away the reality check.

Their next step is the only one that seems valid. They estimate that 9% of ex-smokers who became vapers did so sometime after they had already completely quit smoking, and subtract them. This is plausible. An ex-smoker who is dedicated to never smoking again still might see the appeal of consuming nicotine in a low-risk and smoking-like manner again. (Note that this should be counted as yet another benefit of e-cigarettes, giving those individuals a choice that makes them better off, even though the “public health” types would count it as a cost because they are not being proper suffering abstinents. It might even stop them from returning to smoking.)

Of course, this only makes a small dent. So where does everyone else go? Most of them go here:

It has to be assumed on the basis of the evidence [6, 7] that only a third of e-cigarette users who stopped smoking would not have succeeded had they used no cessation aid

…and here:

It is assumed that, as with other smoking cessation aids, 70% of those recent ex-smokers who use e-cigarettes will relapse to smoking in the long term [11]

This takes them down to 28,000.
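
For clarity, here is the chain of discounts as best it can be reconstructed from their description:

```python
# Reconstruction of the "alternative" analysis, as best it can be followed.

ever_quit_with_ecigs = 560_000  # their (oddly low) cumulative starting figure
switched_in_2014 = 308_000      # after removing those already vaping by 2014

after_already_quit = switched_in_2014 * (1 - 0.09)    # drop the 9% who quit first
after_would_have_quit = after_already_quit * (1 / 3)  # "only a third" assumption
after_relapse = after_would_have_quit * (1 - 0.70)    # 70% assumed to relapse

print(round(after_relapse))  # ~28,000, before their final ~6,000 deduction
```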

Taking the latter 70% first, any limitations in relying on a single source for this estimate (another West paper) are overshadowed by: (a) There is no reason to assume switching to vaping will work as poorly, by this measure, as the over-promising and under-delivering “approved” aids that fail because they do not actually change people’s preferences as promised. Indeed, there is overwhelming evidence to the contrary. (b) Many of those in the population defined by “started vaping that year and were an ex-smoker as of the end of the year” have already experienced a lot of the “long term”. That is, if we simplify to the year being exactly calendar 2014, some people joined that population in December, and thus a (correct, undoubtedly much lower than 70%) estimate of the discounting between “smoking abstinent for a week or two thanks to e-cigarettes” and “abstinent at a year” (a typical measure for “really quitting” as noted above) is appropriate. But some joined the population in January and are already nearly at the long term. On average, they will have been ex-smokers for about six months, and being abstinent at six months is a much better predictor of the long run than the statistic they used (which, again, is wrong to apply to vaping). Combining (a) and (b) makes it clear that this is a terrible estimate.

As for the first of those major reductions, references 6 and 7 do not actually provide any reason that “only a third…has to be assumed”. Those are the same references they cite for the 2.5% above. So this is just a reprise of the 2.5% claim, and suffers from the same errors I cited above.

You see what they did there, right? The reality check I offered is “your results imply that 90% of new ex-smoker vapers did not quit because of vaping; can you explain that?” Either anticipating this damning criticism or by accident, they provided their answer: “Yes, we assume — based on nothing that remotely supports the assumption — that 70% of them would have quit anyway (and 9% were already ex-smokers, and some other bits).”

This step basically sneaks in the same fatal assumptions from their original calculation but is presented as if it offers an independent triangulation that responds to the criticism that their original calculation has implausible implications. Here is a pretty good analogy: Someone measures a length with a ruler that is calibrated wrong by a factor of ten. They are confronted with the fact that a quick glance shows that their result is obviously wrong. So they make a copy of their ruler and “validate” their results with an “alternative” estimation method.

Oh, and at the end of this they knock off another 6,000 using what appears to be double counting, but at this point who really cares?

Conclusions

Their first version of the estimate is driven mainly by their assumption that attempting to switch to vaping is close to useless for helping someone quit smoking compared to unaided quitting, and also that all those who attempted to switch would have tried unaided quitting in the absence of e-cigarettes. There are also other errors. Their second version is based on the “reasoning” that because we have assumed that attempting to switch to vaping is close to useless, it must be that most of those who, as we observed, actually did switch to vaping did not really quit smoking because of vaping — and so (surprise!) approximately the same low estimate.

So nowhere do they actually ever address the reality check question:

Seriously? You are claiming that almost everyone who ventured into one of those weird vape shops, who spent hundreds of pounds on e-cigarettes, who endured the learning curve for vaping, who ignored the social pressure to just quit entirely, and who decided to keep putting up with the limitations and scorn they faced as a smoker and would still face as a vaper — that almost all of them were someone who was going to just quit anyway? You are really claiming that almost all of them said, “You know, I think I will just quit buying fags this week — oh, wait, you mean I instead could go to the trouble to learn a new way of quasi-smoking and spend a bunch of money on new stuff and keep doing what I am doing even though I am really over it and ready to just drop it? Where do I sign up?” Seriously?

Reality. Check. (And mate.)

For what it is worth, if you asked me to do a back-of-the-envelope estimate for this, I would probably go with something like the following:

There were about 200,000 new vaping ex-smokers. It seems conservative to assume that about half of them quit smoking due to vaping. 100,000. Done.

That is obviously very rough, and the key step is just an educated guess. But an expert educated guess is often far better than fake precision based on obviously absurd numbers that just happen to have appeared in a journal (as a measure of something — in this case, not even the same thing). In this case, it has far better face validity than West et al.’s tortured machinations.


What is Tobacco Harm Reduction?

by Carl V Phillips

In response to a couple of recent requests and my schooling of FDA in a recent Twitter thread, it seems time for me to again write a primer on the meaning of tobacco harm reduction (THR). Rather than return to a previous version I have written, I am doing this from scratch. This seems best given the evolution of my thinking and changing circumstances.

The key phrase, of course, is “harm reduction”, with “tobacco” denoting the particular area it is applied to. This is important: THR is not a concept that stands apart from HR. It means “the principles of harm reduction, applied to the use of tobacco and nicotine products, and other products that tend to get lumped in with them” (see my previous post for an explanation of that last bit and some other useful background about the current politics). Indeed, when my university research and education group was trying to decide on a name and URL in 2005, it was far from obvious that this was the right term, and we considered others (e.g., “nicotine harm reduction”). While the first prominent use of “THR” appeared in 2001, it was far from established as a common term. (There is probably some endogeneity here, of course — if we had chosen a different term, that might have ascended instead.) In any case, the key to answering “what is THR” is asking “what is HR” rather than thinking it is something different.

The War on Nicotine begins

by Carl V Phillips

It has become a habit of many e-cigarette defenders to refer to recent chapters of the War on Tobacco as a war on nicotine, in part because they do not like their favored product being called a tobacco product. As for that motivation, yawn, whatever. But as for the statement, it was simply wrong.

The war on smoking in the USA morphed into a war on tobacco, which basically meant lumping in approximately-harmless smokeless tobacco with the not unreasonable original target of the war. This pretty much tracked the tobacco control industry’s professionalization (read: it went from being a noble — though obviously not universally embraced — hard-fought political cause to a venal business that had a license to print money and was constantly seeking new streams of revenue). Elsewhere in the world, the war was expanded to include Scandinavian smokeless tobacco as well as South Asian and other dip/chew products. Thus it was that for most of recent memory, the War on “Tobacco” was a ridiculously wealthy cabal of a few thousand people (with millions of useful idiots, of course) gunning for consumers and producers of smoked tobacco (tobacco, harmful), Western smokeless tobacco (tobacco, approximately harmless), and other oral products (not tobacco, often harmful).

When e-cigarettes finally became a major commercial product, after a remarkably long delay (which is, of course, a very interesting story, but not the present story), the Tobacco Warriors chose to add them to the list of targets. Thus the war became still more gerrymandered to include e-cigarettes. It was still a fairly well-defined single war, defined in much the same way that World War II was a war, despite really being two largely separate major wars and a few dozen border wars, tribal wars, and colonial struggles.  The war is and was defined in terms of what a particular faction did: it was the Anglophone major powers, plus whoever happened to fight one of the same enemies for whatever reason, versus everyone they fought.

As with WWII, the current enemies of tobacco control (which, interestingly, can also be defined largely in terms of government action by the Anglophone major powers) are increasingly not allies of one another. Perhaps that is a tactical error, but they (we) do have rather conflicting interests. But the Tobacco Warriors themselves — a fairly tightly-knit group of agencies, sock puppets, and funders, working together and maintaining remarkable party discipline — make it a war. They also draw boundaries around it: Despite this being fought like every other awful war on drugs, the people involved barely overlap with the traditional Drug War cabals (and, indeed, often actively oppose them despite looking just like them, but that is yet another story).

You can muse about whether there is a better name than “the War on Tobacco” for whatever this is. But one candidate name that was clearly wrong was “War on Nicotine”. For one thing, not all of the targets of the war even contained nicotine. But more important, nicotine in isolation was their thing. It was their peeps who praised, touted, and sold nicotine in its “proper” medicinal form (never mind that NRT is primarily used for the same purpose as the products that are the main targets of their war). One of their favorite go-to tropes was still that cigarette companies’ introduction of lower-nicotine products in the 1970s was some evil plot.

And then something very strange happened. Very strange. Over the last few months, the U.S. FDA suddenly embraced a long-discredited anti-nicotine policy proposal. They announced a policy goal of forcing cigarette manufacturers to lower the nicotine content of their products. (Well, legal cigarette manufacturers. The black market would inevitably replace the banned current products — one of the many reasons why this proposal is long-discredited.) Part of this has been an unending blast of government-sponsored anti-nicotine propaganda. The propaganda asserts — without any evidentiary or serious theoretical basis, needless to say — that forced nicotine reduction in cigarettes is the silver bullet that will “keep all new generations from becoming addicted blah blah blah”.

(Aside: I cannot overstate the strangeness and suddenness of this policy. Basically the only people who still supported this zombie idea were those who stood to profit from it. And then suddenly it was at the center of — indeed, is basically the entirety of — FDA’s tobacco policy. I strongly encourage someone who has the time and platform to make it worthwhile to investigate whether there is a money trail from the very small number of companies for whom this policy is an enormous windfall to the pockets of Price or Gottlieb — it is not like there is no history of corruption there. It is probably also worth checking Zeller and company, though they are not the variable here, so that seems like a long-shot. And the Trump campaign, of course, though given that the White House has not managed to put a government in place, it would have been quite a coup to push down such detailed policy from that far up the chain.)

Meanwhile there was the recent paper from Glantz’s shop, elegantly shredded by Chris Snowdon, in which the authors feebly attempt to tar NRT (their nicotine) as part of the evil machinations of the cigarette industry in the 1990s. I won’t even try to explain — there is nothing remotely defensible about it; read Snowdon if you want details. The importance is that Glantz’s current role is as a paid surrogate for FDA. This cannot be coincidence. FDA and tobacco control cannot comfortably fight a war on nicotine when the nicotine-iest products out there are their products that they have always embraced. So they need to muddy the waters round those products. What better way than to manufacture retroactive innuendo that NRT always was a brilliant cigarette industry plot that the hapless tobacco controllers fell for, and not the colossal screw-up on their own part that it was? That exact ploy has worked for them before.

FDA’s Center for Tobacco Products has always been a propaganda shop (they have certainly never been a real regulator). But previously their propaganda was lame pointless messages pitched at ignorant consumers (who do not even know CTP exists, let alone see their messaging), perhaps to provide memes for their useful idiots to publish (and, again, not actually be seen by anyone in their target audience). The current effort is different in terms of both volume and apparent purpose. You can see the volume by checking out the Twitter feeds of @FDATobacco and FDA Commissioner @SGottliebFDA, and also see the content there and by following the links.

This is not the usual background noise of silly anti-tobacco propaganda. This is a clear example of a fixture in the U.S. political system: a concerted push by a government faction to sell their policy. (The most recent high-profile example of this was from the faction trying to destroy the Affordable Care Act.) The target audience for this includes lazy reporters, who will just transcribe the propaganda and get a free byline, and influential pseudo-experts (aka, useful idiots) who do not know enough to not believe everything they read. The general public, the apparent target of CTP’s previous propaganda, is at most an afterthought as an audience. But the most important audience for these propaganda efforts is others in government, or those who have similar levels of policy-making influence; the aim is to persuade those on the fence and to bludgeon those who might oppose the policy.

For example, there was this from NCI (part of the National Institutes of Health, which along with FDA is part of HHS) that came out just after the Glantz propaganda dropped and as it was being touted by FDA and their surrogates.

The cabal at FDA will find it hard to run a full-on War on Nicotine if NCI actively opposes them. Similarly, there are presumably a lot of tobacco controllers further down in government, and in political organizations, who still embrace the old (correct) notion that nicotine — especially their nicotine — is not the problem. Most of them are just puppets, and will dutifully recite that we have always been at war with Eastasia …er, with nicotine, as soon as they get the message. Others can simply be silenced by the deluge from the agency that has more money than the rest of tobacco control combined. That is the playbook for this kind of inward-directed propaganda.

And so we have, for the first time, an actual War on Nicotine. Note that this does not mean the whole war can be relabeled The War on Nicotine for reasons noted above. This is just part of it. We are still stuck with “War on Tobacco (etc.)” for the larger effort unless someone can come up with something better.

Some commentators who focus only on e-cigarettes appear unaware of what is really happening. Gottlieb and FDA substantially delayed the implementation of the stealth ban on e-cigarettes and have made various noises about embracing e-cigarettes as a low-risk alternative to smoking. So, hey, everything looks good for e-cigarettes!! Some of those commentators have even bought into the FDA propaganda that FDA policies support harm reduction (an utterly Orwellian claim which I will address in my next post, or you can check out my Twitter thread). However, since e-cigarettes are basically a nicotine delivery device, how can there be both a war on nicotine and a more pro-ecig policy?

Indeed, how?

One possible explanation is that FDA is signaling a plan to shift toward the position of British tobacco controllers who have seized control of the vaping mindspace there, intending to use e-cigarettes as just another weapon against smoking and smokers. That playbook involves keeping just enough of a boot on vaping to keep it from being accepted as a normal personal choice (it is only a smoking cessation medicine!), and staying in a position to squash it when supporting it becomes no longer politically expedient.

It could be that. But I genuinely struggle to find that explanation in what we are seeing.

The two messages are simply too flatly contradictory. It is not exactly novel to see messaging from governments that includes policy proposals alongside stated support for goals that are antithetical to those policy proposals. Especially from this government and from this agency — after all, we heard basically the same happy talk about e-cigarettes even as FDA was marching toward a total ban as rapidly as they could. Obviously anyone other than lazy reporters and political actors who are looking for plausible deniability when they fall in with their faction’s bad policies should focus on the policy, not the contradictory happy talk.

But many do not. Thus this happy talk serves the rather obvious purpose of getting e-cigarette advocates — the most vocal and potentially politically effective opponents of a new War on Nicotine — to sit on their hands until the actual policy goal (whatever its crazy or corrupt motivation) has enough momentum. So we can expect no overt anti-ecig actions by FDA for a while. They still will not approve any new products (so there will be those temporarily grandfathered into minimal paperwork in 2016, and a high-paperwork maybe-denial grey zone for later products, still leading to the full ban in 2022) or allow any merchant claims about the low risk. They are just pausing, not retreating. They might withdraw their proposed de facto ban of most smokeless tobacco, issued under the guise of being a health and safety regulation, though frankly that would probably only be because it will never survive judicial review (smokeless tobacco and harm reduction advocates are a much smaller voice than e-cigarette advocates).

But if they gain momentum for their War on Nicotine policy, things will probably go downhill quickly. Implementing a substantive policy (for the first time ever) will empower FDA to go ahead and fight the e-cigarette advocates they temporarily appeased. It seems impossible that a hugely impactful and crazy expensive policy of cigarette nicotine reduction, for the chiiiildren, will not spawn limits on e-cigarette nicotine density and “child friendly” flavors. With the delay of the full-on e-cigarette ban, they no longer have the luxury of not even trying to actually regulate the products; FDA will want to hurt vapers through other means.

If the proposed policy is quashed things are a bit harder to predict. Perhaps FDA will be shy to take on more fights. Perhaps there could be a real change of heart, but it would be the height of foolishness to read that into the same old rhetoric. Perhaps the political party that controls our government is really so deeply dedicated to consumer welfare and free choice, as some advocates seemed to think before the election, and they will clean house at CTP and change its direction (haha — kidding, of course — if that turns out to be the case, I vow to print this out and eat it).

But it seems most likely we would still see e-cigarette “regulation” that serves only as harassing partial bans as soon as they are no longer all-hands-on with their current policy. That is consistent with everything they have done so far. Moreover, it seems especially difficult for them to walk back on e-cigarettes after campaigning for a War on Nicotine for a year and convincing their useful idiots that we have always been at war with nicotine.

Simple Simon would refuse to meet the pieman

by Carl V Phillips

I prefer to write about science (or those pretending to do science, or those who cannot seem to report on it accurately) and the activities of government actors. I very much appreciate the efforts of those who take on blowhards who just pontificate their opinions, but it is just not my thing. However, every now and then, one of the blowhards says something that is really useful as social science. Case in point, a recent tweet by Australian Simon Chapman.

Many of you are familiar, and there is a flurry of recent posts about him because of the current attacks on e-cigarettes in Australia. Others who write about him often label him Simple Simon. Usually I try to avoid making jokes of people’s names, simply because it is always old material: I am sure that Chapman has been called “Simple” regularly since nursery school. But in this case it was too perfect a fit not to use. For those not familiar with Chapman, just imagine Donald Trump in a very small pond. It is a remarkably close analogy: the apparent seething directionless hatred which seems to be salved by political activism, the frequent assertion of obvious falsehoods, the urge to spend time on personally abusive trolling despite ostensibly being a professional and a grownup, and owing his (socially harmful) success to having impressive skills as a con man.

I will not bother to link the tweet because he blocks everyone who ever bests him at verbal sparring, which I am guessing is a lot of you. It was in the context of the Australian government’s consultation (request for comments and inputs) on their e-cigarette policy, which basically comes down to the question of whether to maintain the current ban. He was arguing (to use the term loosely) that manufacturers and merchants should be excluded, because, according to him:

Police don’t meet with drug dealers either.

Here is a screencap (credit for that and for inspiring this post to @K_d_a7):

What makes this piece of pontification interesting is not just that it is so obviously wrong, but what it says about Chapman and his ilk.

Of course, police meet with drug dealers. I am not talking about interrogating them or trying to get them to flip on their colleagues — I’ll give Chapman the benefit of the doubt that he meant to exclude such events (though this is charitable, since he did not say so). Anyone who works on the social science side of public health issues that run up against policing — like, say, a sociologist who works in tobacco control — will have read interesting accounts of such meetings. Well, will have read them if he has an agile mind and actually reads, rather than spending all his time pontificating and trolling. My colleagues who focus on fully banned drugs know the details better than I do, certainly, but it is hard to not be generally familiar.

But here’s the thing: You could figure it out even if you were completely unaware of the history of police meeting with drug dealers. A passing familiarity with the human condition would tell you that it must happen. The only time a faction in a political/economic/military/whatever struggle would never want to talk to another faction is if they had absolutely no common interests. This is the case approximately never. Even states engaging in total war still share an interest in, e.g., POWs not being tortured and executed and in letting civilians escape from besieged cities. Only players in two-player board games have no common interests and nothing to talk about. Once you move beyond such games to something that is merely as real-world as a football match, the players and teams have shared interests, such as agreeing to not let the match degenerate into an injury-producing brawl.

Some drug dealers might want to get out of the business. Even if a cop would rather have a charge stick and make them a burden on the state for twenty years, he might still settle for helping them get out (and a good cop would prefer the latter, of course). The police might have an opportunity to encourage dealers to engage in better business practices, such as reducing violence — anything from heading off a gang war to reducing everyday threats to innocent bystanders. They would do this even though this might not reduce the flow of drugs. Indeed, they might do this even if they were fairly sure it would increase the flow, since they do not share tobacco control’s obsessive and antisocial priorities. If the police discover that a deadly batch of a drug is on the streets, they should (and presumably usually do) alert the relevant dealers they know.

All of these represent examples of the shared goals that inevitably exist even between factions who are mostly adversaries. They also have some fairly obvious analogies in the shared goals of the tobacco industry and (real) public health. Only someone with a “death to them all!” attitude — i.e., not someone who really cares about public health, or human beings for that matter — would disagree. We are talking ISIL or Duterte levels of inhumanity. Of course, tobacco controllers have praised both ISIL and Duterte, so that is who we are dealing with.

Chapman is notorious for declaring that anything that tobacco companies object to must be good — he calls it “the scream test”. Who knows whether he actually believes this, or whether he believes anything for that matter — see aforementioned comparison to Trump. In particular, he absurdly concluded that because major tobacco companies object to brand confiscation (plain packaging), citing as reasons that it aids black marketeers and increases sales of cheaper non-name-brand products at their expense, it means that it must be a good policy. Um, yeah. By his logic, since people at tobacco companies are opposed to an asteroid collision exterminating humanity, then that too must be a good thing.

As if to assist my analogy, I ran across this tweet while writing this post, about the discourse in American politics:

Anyway, the point here is not to observe that Chapman tweeted something that is obviously false. It is not as if that list needs any additions. As with Trump, it is basically an everyday occurrence, saying something he thinks is clever but that just demonstrates he does not understand the world. Well, it demonstrates that to most people — the two men’s bases, which have similar levels of thoughtfulness, presumably eat it up.

But as with Trump or Pravda, there is often something to be learned from content even when you know the goals of the author did not include conveying honest information. In this case, it is insight into tobacco controllers’ tendency toward an expansive version of the mirror-image delusion. As I discussed in more detail here, the mirror-image delusion describes people with limited insight who assume that whoever they are looking at across a table is just like them. This has come up in the context of the Australian consultation, with tobacco controllers accusing everyone who submitted an objection to the ban on e-cigarettes of being a paid shill. Others have written about the obvious irony of people who only act when they are getting paid accusing honest people — who just want the right to control their bodies and use a preferred alternative to cigarettes — of being shills. Chapman’s comment makes clear that his ilk are similarly deluded about everyone’s motives.

They really think they are playing a two-player winner-take-all board game against the tobacco industry, or perhaps against the genus Nicotiana itself. They often appear oblivious to the primary stakeholders, the consumers. But it is really worse than that. If it is a two-player winner-take-all game, then those consumers are actually the enemy. (I explore that theme further in what I consider my best post ever.)

That is not a new insight. But it might mean something somewhat different when considered in the context of thinking everyone in the world is their mirror image. In their mind, everyone acts as if the world were a board game. Cops do not meet with criminals because they have no common interests. There are no peace or disarmament talks. Political caucuses do not negotiate deals with each other. Those doing real public health work — health inspectors and advocates who wish to improve nutrition — never talk with the pieman. Tobacco controllers did not negotiate deals with the major American cigarette companies to create the mutually-beneficial Master Settlement Agreement and Center for Tobacco Products.

Of course that last bit reminds us that tobacco control attracts a certain kind of grifter, who just sees it as a good way to make money (not just in the USA — see also: ASH; also, recall the Trump analogy). But the people who those grifters let serve as their public face are a special bunch for another reason. They are people who never grew beyond the male adolescent worldview that the world is all about two-player board games. They are not just pretending to believe that everyone thinks that way. They give commendations to Duterte not because they are setting aside his ruthless behavior in policing — though that alone would be unforgivable — but because they admire and envy it (note Trump analogy again, though I doubt it is too subtle here). That, to them, is the right way to run a police force, not with all that silly treating all people as human beings who are worthy of concern like this loser.

Needless to say, I am not gleaning all this from one tweet by one serially abusive troll. It is based on a pattern of observation and was merely made stark by that tweet. The fact that tobacco control is a comfortable home for those who genuinely feel that cops should just be executing the “bad guys” is yet another reason why real public health needs to disavow tobacco control just like they do other drug warriors.

(update) Postscript: Saw this ten minutes after posting. Presented without further comment:

FDA’s proposed smokeless tobacco nitrosamine regulation: innumeracy and junk science (postscript)

by Carl V Phillips

For completion of this series (with this footnote), the following is what I submitted to FDA. My comment does not yet(?) appear on the public docket as of this writing. But I got a confirmation (conf code 1k1-8xfb-dhwh if you want to search for it later). It has a bit of extra content beyond what I already presented.

I know a few of you urged me to rewrite my analysis in a more, er, formal manner. While I understand the reasoning for doing so, I chose not to take time from my other obligations to do that. I honestly think it does not make any difference. I am reasonably confident that FDA “fulfills” their obligation to consider all the comments by having a low-level staffer read each one, without reporting anything of substance up the chain, so they can check a box that says they read and considered each of them. If this proposed rule is not withdrawn for political reasons or as a result of the various procedural problems, then whoever is pursuing a lawsuit to strike it down can enjoy my essays as they mine them for substance. (Shameless plug: Of course, if they would like to hire me to formalize anything, I am quite good at that.) Besides, I might manage to embarrass that staffer who reads it into going into a more honorable line of work.

The content follows:


The primary purpose of this comment is to demonstrate that FDA’s assessment of the supposed benefits of this rule (115 fatal cancers averted per year) is fatally flawed for approximately half a dozen reasons, each one of which is sufficient to invalidate it. I have published the analysis in the following three blog posts, which I incorporate into this comment by reference:

https://antithrlies.com/2017/06/26/fdas-proposed-smokeless-tobacco-nitrosamine-regulation-innumeracy-and-junk-science-part-1/

https://antithrlies.com/2017/06/29/fdas-proposed-smokeless-tobacco-nitrosamine-regulation-innumeracy-and-junk-science-part-2/

https://antithrlies.com/2017/07/02/fdas-proposed-smokeless-tobacco-nitrosamine-regulation-innumeracy-and-junk-science-part-3/

(I have also attached printouts of them for completeness, but I would suggest reading the online versions with live links.)

The implication of that analysis is that there is no scientific basis for claiming that any disease incidents will be prevented by this rule, let alone the specific quantity claimed by FDA as the rule’s justification. Based on this alone, the rule should be withdrawn.

This analysis should not be interpreted as implying that if, counterfactually, the 115 figure were actually science-based, then it would justify the rule. There is no analysis of the negative health impacts from driving smokeless tobacco users to smoking when their preferred products are banned. The absence of this analysis is another sufficient reason for withdrawal of the rule. Moreover, even if there were a legitimate reason to believe there were health benefits, and even if there were no health costs, justifying this rule would require a cost-benefit analysis that considered the welfare loss to consumers and other costs. The absence of this analysis is yet another sufficient reason for withdrawal of the rule.

Finally, given the lack of cost-benefit analysis of any sort, there obviously is no justification for choosing the particular quantitative standard in the proposed rule (even apart from the fact that it appears to be 1/4 of the intended quantity). This makes the choice of the standard arbitrary and capricious. It appears it must have been chosen with an eye to which particular winners and losers it would create, as I presented in this footnote to the previous analysis here (incorporated into this comment by reference and also attached):

https://antithrlies.com/2017/07/09/sunday-science-lesson-toxicology-and-the-chains-in-american-football/

While not central to the main point of this comment, this is a further problem with the legitimacy of this rulemaking.

FDA’s proposed smokeless tobacco nitrosamine regulation: innumeracy and junk science (part 3)

by Carl V Phillips

In Part 1 of this series, I described FDA’s proposed rule that would require smokeless tobacco products (ST) to have no more than 1 ppm of NNN (a tobacco-specific nitrosamine or TSNA) dry weight. I discussed some of the political and policy implications of this, and reasons why the rule will probably not survive. I also noted that almost no current products meet that standard, and that American-style ST probably cannot meet it. Despite the proposed rule probably being mooted, I noted there is still value in examining just how bad the ostensibly scientific analysis behind it is. In Part 2, I noted that FDA’s estimate that the standard would save 115 lives per year is premised on their estimate of the risk of oral cancer caused by ST use. But, in fact, the evidence does not support the claim that ST use causes any oral cancer risk. I then focused on why, even if one believes there is some such risk, the method used to calculate FDA’s quantitative estimate is utter junk science.

So far, none of that has addressed NNN itself, and how meeting the NNN standard would affect the carcinogenicity of ST, if it is carcinogenic. It turns out that this part of FDA’s analysis is even worse than that discussed in Part 2.

Estimating the health effect of a quantitative standard for an exposure is a matter of estimating the relevant range of the dose-response curve, along with knowing how much people’s dosage would change. That is, you need estimates like, “N people use product X, which has 5 ppm NNN, which causes Y risk per person, versus the Z risk per person from 1 ppm, so multiply N by (Y-Z)….” With such numbers we could estimate the effect of an adjustment in the NNN concentration.
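
To make that concrete, here is a toy version of that arithmetic in Python. Every number in it is a made-up placeholder of mine, not a figure from the rule or from any study:

```python
# Toy version of the population-impact arithmetic described above.
# All values are hypothetical placeholders.
N = 5_000_000   # hypothetical number of users of a 5 ppm NNN product
Y = 1.0e-5      # hypothetical annual cancer risk per user at 5 ppm
Z = 0.3e-5      # hypothetical annual cancer risk per user at 1 ppm

cases_averted = N * (Y - Z)
print(f"cases averted per year: {cases_averted:.0f}")  # 35 in this toy example
```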

In reality, it is not that simple. In Part 1, I pointed out that most products could not just have their NNN concentration “adjusted” like that, and that they would have to be fundamentally changed, effectively eliminated and replaced in the market (perhaps if FDA had not made the arithmetic error noted in Part 1, that would only be “some” rather than “most”). Many consumers of the eliminated or fundamentally altered products would not be happy with the new option. Some would just quit, eliminating the Y risk as well as any other risk from using the product (setting aside that, as far as we know, both are nil; remember, we are down that rabbit hole here). Some would switch to smoking, creating a risk that is orders of magnitude greater than anything discussed so far, making all of the details moot: the net health impact would be an increase in risk.

But that is the simple practical criticism of this madness, one that hinges on questions of consumer behavior (an area where FDA’s analyses are consistently absurd, but they always manage to trick their audience into accepting their assertions). That is not what I am doing here, though I suppose I just did it in one paragraph. My goal is to point out that FDA’s core claims about benefits here are based on junk science, setting aside the enormous costs that would dwarf them anyway. So, returning to my point: what basis do we have for estimating Y, Z, and other points along the dose-response curve?

None.

Absolutely nothing.

Indeed, we do not even know that NNN in ST affects cancer risk at all.

As I mentioned in Part 1, if you are only familiar with the rhetoric about this topic, and not the science, you would be forgiven for not knowing that the assertion there is any such effect is based only on heroic extrapolations and assumptions. You might further surmise that since FDA claims that this reduction would reduce cancer deaths by 115 per year (note: not “about 100”, but as precise as 115), there is not only evidence that NNN in ST causes cancer, but there is also so much evidence that we can precisely estimate a dose-response.

What we know about NNN and cancer is based on biological theory (we have evidence that some nitrosamines cause cancer in humans), and the effects of exposing rats, hamsters, and other critters — species whose propensity to get cancer from an exposure is often radically different from ours, and even from one another’s — to megadoses of NNN. Those toxicology studies do suggest that NNN exposure probably causes cancer in humans, in a big enough dose, and under the right circumstances. Of course, that is also true for almost everything. When IARC, the cancer research arm of the WHO, made their blatantly-political decision to declare NNN a known human carcinogen, they did so in violation of their own rule that there has to be some actual human exposure evidence before making such a declaration. There is not. But even if someone believes that NNN in ST does cause cancer in humans, the rodent megadose data obviously does not tell us anything about the effect of the reduction in dosage imposed by this rule.

Stepping back, it is useful to understand the potential legitimate use of toxicology studies like those. They — or, better, in vitro studies of cells that are actually similar to the human body and do not require sociopathic torturing of innocent animals — are useful for giving us a heads-up that a chemical or combination of chemicals might be carcinogenic or poisonous. This might be a good reason to undertake the more difficult search for epidemiologic evidence that the real-world version of the exposure is causing the bad outcome. Or at least a reason to pursue the in-between step of looking for biological evidence of harm from the real-world exposure in humans. It might even be sufficiently compelling to prohibit introducing a novel exposure, acting before we can even get any human data.

If toxicology studies of a chemical all fail to produce a bad outcome, this strongly suggests that the exposure will not cause the harm, so long as that failure is consistently confirmed using various toxicology methods (claims that a single toxicology study shows that an exposure is harmless, which are currently appearing in the pro-vaping rhetoric, are misguided). But getting a bad outcome in a particular toxicology study does not mean that the real-world exposure actually does cause harm. The pattern in the toxicology has to be far better than what we have for NNN before such a conclusion is justified, including getting the effect at reasonably realistic exposure levels and fairly consistently across a variety of methods.

Consider an analogy: We are interested in knowing whether there is life on other planets, but actually going there to take a look is rather difficult. We have a much cheaper tool in our toolbox, however, which is to use modern telescopes to see if light scatter suggests a water-rich atmosphere. Of course, that is far short of observing life; it would be insane to say “we saw evidence of water, so there must be life there!” But since the versions of life that we understand require there to be enough water, seeing that creates the intriguing possibility of life. Failing to find water tends to rule out the possibility of life as we know it.

Another legitimate use of toxicology is to tell us why an exposure is causing harm. Of course, this should mean there is evidence of harm, not just some wild assumption that there is harm. Continuing the analogy, pretend that someone looked at the light scatter around Mars and claimed they saw enough water to support life: “Aha, this shows that the canal-building civilization is water-based life as we know it.” Um, but you do know that early 20th century telescopes debunked that 19th century canals myth, right? Also we have had numerous close observations of the planet and little labs driving around on the surface. Your hint about the possibility of life is utterly pointless given that we have much better information about the reality.

I have often described the TSNA toxicology research, which inexplicably continues to this day, as an attempt to identify which chemical pathways cause a cancer outcome that does not actually occur. As with Mars not having canals, we know that ST use does not cause a measurable risk for cancer, and therefore the NNN and other TSNAs in ST are not causing a measurable risk (unless we think that other aspects of the ST exposure prevent exactly as much cancer as the TSNAs cause, something that no one is seriously proposing). One possibility that has been seriously proposed — e.g., by Brad Rodu, whose work I cited in Parts 1 and 2 — is that something else in ST, perhaps antioxidants, directly negates whatever cancer-causing effect the TSNAs might have if we were exposed to them alone (which does not happen at a level beyond a few stray molecules). Indeed, when the exposure is tobacco extract, those rodent studies fail to show the carcinogenic effect from NNN, or anything else in ST for that matter, a fact that is conveniently glossed over.

So how did we end up with the “fact” (which I suppose should be called the fake news in current parlance) that NNN and other TSNAs in ST cause cancer? It basically comes down to circular reasoning, or perhaps it is figure-eight reasoning since there are two circles as well as a few other fallacies. It goes something like this (and I am really not exaggerating):

“Given that we have only seen an effect in megadose rat studies, how can we really be sure that TSNAs at the relevant dosage and in a realistic exposure cause cancer?”

“Because smokeless tobacco causes cancer, and it contains TSNAs.”

“But [even setting aside that we do not know that is true] how could you know it was the TSNAs causing it?”

“Because we know TSNAs cause cancer.”

“Um, isn’t that so transparently circular that even tobacco control’s useful idiots will see right through it?”

“There is more. We know that higher-TSNA products cause more cancer risk.”

“Ah, now that sounds like actual evidence. Please explain.”

“US products have higher TSNA levels than Swedish products, and US studies show a cancer risk while Swedish studies do not.” [Note: see appendix to this dialogue, below.]

“But didn’t you read Part 2 of this series? That contrast does not appear in studies of modern US products, but only from a few studies of an archaic type of product.”

“Yes, exactly. That product was very high in TSNAs, and its cancer effects were off the charts compared to modern products. Case closed.”

“There are no measurements of the TSNA levels of those archaic products. How do you know they had high TSNA levels?”

“Isn’t it obvious? They must have, because they caused cancer and TSNAs cause cancer.”

Loopity loopity loop.

In fairness, there are honest observers, including Brad Rodu, who hypothesize that this is indeed the reason the archaic products apparently caused cancer. But this is just a hypothesis, and it cannot be tested. Indeed, we cannot even replicate the basis for claiming those products caused cancer in the first place. It basically comes down to a single study from the 1970s — not exactly overwhelming evidence.

A bit more useful background: In the 2000s, the anti-ST crusaders in and funded by the US government (CDC and NCI, before FDA joined the game) fought a rearguard action against the evidence that had emerged from Sweden that ST was approximately harmless. Part of this was insisting that the higher levels of TSNAs in US products meant that the Swedish evidence was not informative. It was political bullshit on its face. Still, I wrote an analysis over a decade ago that showed that the ST products that produced those null results in Sweden had about the same TSNA levels as then-current US products. (This was based on limited analytic chemistry from before 2000. There were only a handful of TSNA concentration studies in the public record. But there was enough to show this.) TSNA levels in all styles of ST products were and are decreasing over time. It might have been true that 1990 US products were materially more hazardous than 1990 Swedish products (which showed no measurable risk) because they had higher TSNA levels. But mid-2000s US products had low enough TSNA levels that this would have no longer been true. This leads to the appendix for the dialogue. We could imagine this variation:

“US products have higher TSNA levels than Swedish products, and US studies show a cancer risk while Swedish studies do not. Also there is a time trend, wherein TSNA levels have been dropping in both US and Swedish products, and older studies found elevated cancer risks, while newer ones do not.”

“Part 2 of this series dismisses your first sentence. But the second sentence makes some sense, though it might just be because the older studies used really primitive methodology. Still, you have a prima facie valid point there, unlike all your other complete bullshit. But, hey, doesn’t that also mean you are conceding the fact that modern ST products do not cause any measurable cancer risk, even if older products might have?”

“Er, no. We never said that. We never made any claim about time trends despite it being the most scientifically defensible argument we have. Strike all that from the record.”

Summarizing this, we have only unsupported hypotheses and circular reasoning behind the claim that NNN in ST causes any of the (quite possibly zero) cancers caused by ST. Given this, we obviously know nothing about how much cancer a particular concentration of NNN causes. That is sufficient to show that FDA’s claim cannot possibly be science-based. But I am sure you share my curiosity about how FDA took this complete lack of information and turned it into the conclusion that exactly 115 lives per year would be saved by this regulation.

Here it is (from the proposed regulation):

….increase in oral cancer risk of 116 percent among smokeless tobacco users compared with never users. We then reduce this value by 65 percent based on toxicological evidence relating the estimated average reduction in the dose of NNN to lifetime cancer risk under the proposed standard. The result is a reduction in the estimated relative risk of oral cancer to 1.41 under the proposed product standard. FDA used the following calculation: (1 + (2.16−1) × (1−0.65) = 1.41) for this determination.

Thanks, guys, for showing us how to do that arithmetic so I did not have to find a third grader to ask. The important bit of showing their work, of course, is about justifying the inputs. In the introduction, FDA refers the reader to section IV.C for the basis for the .65 figure. It is really section IV.D, because, hey, just because you spent a million dollars writing a regulation that is potentially devastating for industry and millions of consumers does not mean you should bother to have someone edit it. It turns out the assumption is that the dose-response is linear across all quantities, and under that assumption the effects observed from megadoses in rodents give a dose-response that translates into the .65 figure. The generic problems with this include the fact that the linear (also known as “one hit”) model of carcinogenesis has long since been dismissed as invalid, the folly of extrapolating orders of magnitude beyond the observed data, and the little matter that rodents are not people.
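
For what it is worth, the arithmetic itself checks out. Here it is reproduced as a sketch, with FDA’s two (unjustified) inputs labeled; the problem is the inputs, not the third-grade math:

```python
# FDA's own calculation, reproduced. Both inputs are criticized at
# length in this series; only the arithmetic is beyond dispute.
rr_baseline = 2.16   # FDA's assumed oral cancer relative risk for ST use
reduction = 0.65     # FDA's assumed fractional cut in excess risk from the NNN cap

rr_under_standard = 1 + (rr_baseline - 1) * (1 - reduction)
print(round(rr_under_standard, 2))  # 1.41
```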

It gets worse still when you look at the equation that FDA used to calculate the fictitious linear trend. (And I am not referring to the fact that they actually cut-and-pasted the equation in their document as an image from some low-res PDF of someone else’s document. This is not a scientific flaw, of course, but it does suggest the proposed rule was written by people who have so little education and experience in science that none of them had ever learned how to typeset a simple equation.) The equation builds in the assumption that a very high exposure for a short time (e.g., what the rats experienced) has the same effect as the same total exposure stretched out over many years. This is the linearity assumption taken to the extreme. It not only assumes linearity for each parameter — i.e., increasing years of exposure, increasing quantity per exposure, or increasing number of exposures per day by Y% increases risk by Y% — which is completely unsupportable and almost certainly wrong. It also assumes a multiplicative effect for all interactions, which is also unsupportable and almost certainly wrong. For those who did not follow that, I will explain its major implication: The assumption is that a given lifetime quantity, X, of NNN exposure creates the exact same total cancer risk whether it is consumed all in one day, or one month, or spread out over 70 years. It is the same whether an ongoing exposure takes place all at once each Monday morning or it is spread evenly throughout the week. Moreover, if you increase X by 10% it increases the risk by 10% no matter how the consumption is spread out. On top of all that, if someone’s body mass is 10% lower his risk from X is always increased by 10%. If his mass is 99.963% lower (i.e., he is a hamster and not a human), then the risk is increased exactly 2720-fold.
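
To make the structure of that model concrete, here is a minimal sketch; the slope-factor value is an arbitrary placeholder of mine, not FDA’s:

```python
# Sketch of the fully linear, fully multiplicative model the equation
# embeds: risk depends only on total lifetime dose divided by body
# mass, so the timing and intensity of exposure drop out entirely.
CSF = 0.01  # hypothetical "cancer slope factor" (placeholder value)

def lifetime_risk(total_dose_mg, body_mass_kg):
    # Same answer whether the dose arrives in one day or over 70 years.
    return CSF * total_dose_mg / body_mass_kg

print(lifetime_risk(100, 75))    # a 75 kg human
print(lifetime_risk(100, 0.03))  # a 30 g hamster: 2,500 times the risk
```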

Such simplifying assumptions about linearity and multiplicativity are not terrible if you are interpolating (i.e., you have data from both sides of the quantity you are assessing and you are trying to fill in the middle) or are extrapolating a little bit beyond the range of your data. But in this case they are extrapolating orders of magnitude beyond the rat data: weeks of exposure rather than decades, 30 g bodies rather than 75 kg, and crazy large doses. And, of course, there is the little matter of assuming that a different exposure pathway in a different species has the same effect as ST exposure in humans. With such a huge extrapolation, the slightest departure of the assumptions from reality (and it is safe to say that the departures are more than slight) means that the final estimate is complete garbage.

It gets worse. The key parameter is what is multiplied by the total lifetime units of exposure in order to estimate risk, which FDA calls the “cancer slope factor” or CSF if you want to search for it in the document. For this, they rely entirely on a 1992 estimate from the California EPA, which itself was based on the results of a 1983 paper that looked at what happens when hamsters were given huge doses of NNN dissolved in their drinking water. Yes, really. FDA’s number ignores the ~99% of the relevant research that has been done in the last three decades, and it was obviously pretty sketchy even in 1992 given that it was based on a study whose real information value (about actual human exposures) was approximately nil. Moreover, there is this:

As defined by the EPA guidelines, the cancer slope factor (CSF) is “an upper bound (approximating a 95percent [sic] confidence limit) on the increased cancer risk from a lifetime exposure to an agent.”

So apparently (the methods are reported so poorly that it is hard to be certain) they not only based this key number on evidence — to use the word rather loosely — from a single ancient toxicology study, but they did not even use the actual estimate that was generated from that. Rather, they used a larger number generated via an arbitrary process. The upper bound of a 95% confidence interval is a completely meaningless number in this context. There is an argument (which many would call dubious) that some arbitrary inflation of the point estimate like this should be used in “abundance of caution”-based regulations. (Update: More on this in my follow-up post.) But it is not an estimate of the actual effect. I know this seems like an arcane technical point in the context of everything else, but I cannot stress enough what an enormous failure of legitimate science this is (assuming they did what it sounds like they did). This would mean, for example, that if there had been fewer observations collected in that 1983 study, but it had still supported exactly the same point estimate, FDA would be claiming some larger number of lives saved, like 125 per year rather than 115.
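
To see the perverse incentive at work, here is a sketch with entirely hypothetical numbers: hold the point estimate fixed, shrink the study, and the “CSF” grows.

```python
# Sketch: a one-sided 95% upper bound rises as the study shrinks, even
# with an identical point estimate. All numbers are hypothetical.
point_estimate = 1.0   # hypothetical slope estimate from an animal study
sd = 0.5               # hypothetical standard deviation of the observations

for n in (40, 10):     # same point estimate, two different study sizes
    upper_95 = point_estimate + 1.645 * sd / n ** 0.5
    print(f"n={n}: upper bound = {upper_95:.2f}")
# n=40 -> 1.13; n=10 -> 1.26. The weaker study yields the bigger
# "slope factor", and thus a bigger claimed body count.
```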

When presenting this number, and practically admitting it is junk (despite using it to calculate their estimate of 115 to three significant figures), FDA writes:

FDA welcomes public comment on whether there is a more robust CSF available for NNN.

This is a classic bit of anti-scientific rhetorical strategy. Anyone answering that question as phrased is implicitly conceding that the estimate FDA used has some validity. Respondents are effectively conceding that if they cannot make a compelling case that some other number is better, then FDA’s number was appropriate to use. When a question’s phrasing builds in invalid assumptions, or when it assumes away the really important questions (“Have you stopped beating your wife?”), the response needs to unask it, not answer it. So here is my unasking answer to their welcoming of public comment:

The number FDA used has absolutely no hint of validity. However, there is no robust, or even remotely plausible, basis for generating this “CSF”; any number used here might as well be made up out of thin air. That said, given that ST does not seem to cause oral cancer in the first place, the best default estimate is zero. There is no legitimate basis for concluding an estimate of zero is wrong. Oh, and also if you are going to use a junk-science extrapolation from rodent studies, you should at least calculate this number based on all such studies to date. If you are not capable of doing that analysis, and instead are limited to using the approach any middle-school student would use if confronted with this question (run a search and blindly transcribe whatever someone once wrote), then you have no business regulating anything!

I’ll take a deep breath here, because that is still not all. Look back at that grade-school arithmetic they showed us. Notice any assumptions embedded in it? Yes, that’s right: they assumed that all the cancer risk that they claim is caused by ST is caused by NNN, and thus a .65 reduction in the risk from NNN exposure is a .65 reduction in total risk. Wait, what? FDA did some hand-waving in their document about reductions in NNN also carrying along reductions in another TSNA, NNK, but they never tried to justify the claim that the (supposed) cancer risk was all due to NNN, or even NNN plus NNK. How could they?

Effectively, FDA has just declared that they believe that whatever cancer risk (at least oral cancer risk) is caused by ST consumption is caused entirely by TSNAs, with no other molecules contributing any cancer risk. They never suggested this was a simplifying assumption. This could have some amusing implications. The next time you see one of those anti-scientific bits of propaganda about ST containing 27 carcinogenic chemicals (or whatever number they are making up that day), you can reply that FDA has declared that at least 25 of those do not actually cause cancer. On the other hand, we should probably not push this too hard. I am guessing that, given all the other errors, the authors of this rule did not understand their own arithmetic sufficiently to know they were implicitly declaring this to be true.

Returning to the life on Mars metaphor, and the dialogue motif, the “logic” behind the FDA analysis would map to something like the following:

“From my light-scatter observations, I have concluded that had the water density in the martian atmosphere been X, instead of the Y I observed, the civilization that built the canals would not have collapsed just after helping humans build the pyramids, but would have thrived for 1,150 more years.”

“Wait, what? There are no canals. There was no civilization. Ancient extraterrestrial visitation stories are just silly claims by people who do not understand science and technology. The rovers and other Mars exploration have already shown that if there is or was anything we might call life, it has had no perceptible impact, let alone built a civilization. There is not enough water to support an ecosystem now, and was not enough 5000 years ago. But even if there had been a civilization, there is obviously no basis for estimating how atmospheric water density affected it, let alone a way to predict its demise to three significant figures based on one observation. As a minor point, I am not sure from what you said whether you meant Mars years or Earth years, but I am guessing you do not even know they are different.”

I am not being hyperbolic when I say FDA’s proposed rule comes across as parody. It reads like someone concocted it in order to ridicule a collection of faulty common practices and reasoning in public health science, creating cartoon versions to highlight problems that are often subtle. Please reassure us, FDA, that this was intentional. Even more so, those of you at the Center for Tobacco Products might want to reassure your colleagues elsewhere in FDA that this is not what their once respectable agency has come to.

Alternatively, perhaps it was really a joke by outgoing officials, hoping for a *popcorn* moment when the new administration tried to defend the rule in court. Or maybe it was just a Dadaesque tribute to the day it was issued. I realize these do not seem like terribly likely explanations, but they are more plausible than believing that anyone with a modicum of scientific expertise thought that this hot mess was legitimate analysis.

FDA’s proposed smokeless tobacco nitrosamine regulation: innumeracy and junk science (part 2)

by Carl V Phillips

In the previous post, I gave some background about the new proposed rule from FDA’s Center for Tobacco Products (CTP) that would cap the concentration of the tobacco-specific nitrosamine (TSNA) known as NNN allowed in smokeless tobacco products (ST). Naturally, I think you should read that post, but to follow the scientific analysis which begins here, you do not need to.

Before getting to the even worse nonsense about NNN itself, it is worth addressing CTP’s key premise here: They claim that ST causes enough cancer risk, specifically oral cancer, that reducing the quantity of the putatively carcinogenic NNN could avert a lot of cancer deaths.

Readers of this blog will know that the evidence shows ST use does not cause a measurable cancer risk. That is, whatever the net effect of ST use on cancer (oral or otherwise), it is not great enough to be measured using the methods we have available. That does not necessarily mean it is zero, of course. Indeed, it is basically impossible that any substantial exposure has exactly zero (or net zero) effect on cancer risk. But even if all the research to date had been high-quality and genuinely truth-seeking — standards not met by much of the epidemiology, unfortunately — there would be no way to detect a risk increase of 10% (aka, a relative risk of 1.1) or, for that matter, a risk decrease of 10%. Realistically, we could not even detect 30%. For some exposure-disease combinations it is possible to measure changes that small with reasonable confidence (anyone who tries to tell you that all small relative risk estimates should be ignored does not know what he is talking about). But it is not possible for this one, at least not without enormously more empirical work than has been done.
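
For anyone who wants to see the scale of the problem, here is a standard two-proportion sample-size calculation as a sketch; the baseline risk is a hypothetical input of mine:

```python
# Sketch: subjects needed to detect RR = 1.1 with 80% power at a
# two-sided alpha of 0.05, via the usual two-proportion formula.
z_alpha, z_beta = 1.96, 0.84
p0 = 0.01        # hypothetical lifetime oral cancer risk, unexposed
p1 = p0 * 1.1    # the 10% increase we would like to detect

n = (z_alpha + z_beta) ** 2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p1 - p0) ** 2
print(f"{n:,.0f} subjects per group")  # ~163,000 per group
# And with only ~5% of US men exposed, filling the exposed group means
# enrolling millions -- before even worrying about confounding.
```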

Despite that, FDA bases the justification for the rule on the assumption that ST causes a relative risk for oral cancer of 2.16 (aka, a 116% increase), or a bit more than double. This eventually leads to their estimate that 115 lives will be saved per year. Before even getting to their basis for that assumption, it is worth observing just how big this claimed risk is. (I will spare you a rant about their absurd implicit claims of precision, as evidenced in their use of three significant figures — claiming precision of better than one percent — to report numbers that could not possibly be known within tens of percent. I wrote it but deleted it and settled for this parenthetical.)

A doubling of risk, unlike the change of 10% or 30%, would be impossible to miss. Almost every remotely useful study would detect an increase. Due to various sources of imprecision, some would have a point estimate for the relative risk of 1.5 (aka, a 50% increase) and some 3.0, but very few would generate a point estimate near or below 1.0. Yet the results from most published studies cluster around 1.0, falling on both sides of it.

You would not even need complicated studies to spot a risk this high. More than 5% of U.S. men use smokeless tobacco. The percentages are even higher, obviously, for ever-used or ever-long-term-used, which might be the preferred measure of exposure. This would show up in any simple analysis of oral cancer victims. With 5% exposed, doubling the risk would mean about 10% of oral cancer cases among nonsmoking males would be in this minority. A single oral pathology practice that just asked its patients about tobacco use would quickly accumulate enough data to spot this. It is not quite that simple (e.g., you have to remove the smokers, who do have higher risk) but it is pretty close. The point is that the number is implausible.
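
The back-of-the-envelope version of that check: with exposure prevalence p and relative risk RR, the expected share of cases who are users is p·RR / (p·RR + (1−p)).

```python
# Expected share of (nonsmoking male) oral cancer cases who are ST
# users, given 5% exposure prevalence and a relative risk of 2.
p, rr = 0.05, 2.0
share_of_cases = p * rr / (p * rr + (1 - p))
print(f"{share_of_cases:.1%}")  # ~9.5%, i.e. about one case in ten
```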

In Sweden, ST use among men is in the neighborhood of 30% (and smoking is much less common). A doubling of risk for any disease that is straightforward to identify, like oral cancer and most other cancers, would be much more obvious still. But no such pattern shows up. The formal epidemiology also shows approximately zero risk. Most of the ST epidemiology is done in Swedish populations, basically because relatively common exposures are much easier to study.

So how could someone possibly get a relative risk estimate of more than double?

The answer is that they created the absurd construct, “all available U.S. studies” and then took an average of all such results. (They actually used someone else’s averaging together of the results. They cite two papers that did such averaging and — surprise! — chose the higher of the results, though that hardly matters in comparison to everything else.) This is absurd for a couple of reasons which are obvious to anyone who understands epidemiologic science, but not so obvious to the laypeople that the construct is designed to trick.

You might be thinking that it is perfectly reasonable to expect that different types of ST pose different levels of risk. Indeed, that seems to be the case (however, the difference is almost certainly less than the difference among different cigarette varieties, despite the tobacco control myth, mentioned in Part 1, that they are all exactly the same). But nationality obviously does not matter. Should Canadian regulators conclude that nothing is known about ST because there are no available Canadian studies? This is like assessing the healthfulness of eating nuts by country; the difference is not about nationality but mostly about what portion of those nuts are peanuts (which are less healthful than tree nuts). If the category of nuts is to be divided, the first cut should be health-relevant categories of nuts, not nationality. Nutrition researchers and “experts” are notoriously bad at what they do, but few would make the mistake FDA made here.

The error is particularly bad in this case: It turns out the evidence does not show a measurable difference in risk between the products commonly used in the USA and those commonly used in Sweden. The data for all those is in the “harmless as far as we can tell” range. But it appears that an archaic niche ST product, a type of dry powdered oral snuff that was popular with women in the US Appalachian region up until the mid-20th century, posed a measurable oral cancer risk. It turns out that a hugely disproportionate fraction of the U.S. research is about this niche product — disproportionate compared to even historical usage prevalence, let alone the current prevalence of about nil. There is nothing necessarily wrong with disproportionate attention; health researchers have perfectly good reasons to study the particular variations on products or behaviors that seem to cause harm. Also, it is much easier to study an exposure if you can find a population that has a high exposure prevalence, in this case Appalachian women from the cohorts born in the late 19th and early 20th centuries.

It is not the disproportionate attention that is the problem. The problem is the averaging together of the results for the different products. Even if that might have some meaning if the average were weighted correctly, it was very much not weighted correctly.

The 2.16 estimate was derived using the method typically called meta-analysis, though it is more accurately labeled synthetic meta-analysis since there are many types of meta-analysis. It consists basically of just averaging together the results of whatever studies happen to have been published. Even in cases that are not as absurd as the present one, this is close to always being junk science in epidemiology. The problems, as I have previously explained on this page, include heterogeneity of exposures, diseases, and populations, which are assumed away; failure to consider any study errors other than random sampling error; and masking of the information contained in the heterogeneity of the results. To give just a few examples of these problems: Two studies may look at what could be described in common language as “smokeless tobacco use”, but actually be looking at totally different measures of quite different products. Similarly, one study might look at deaths as the outcome and another look at diagnoses, which might have different associations with the exposure. A study might have a fairly glaring confounding problem (e.g., not controlling for smoking), but get counted just the same, obscuring its fatal flaw as it is assimilated into the collective. One study might produce an estimate that is completely inconsistent with the others, making clear there is something different about it, but it still gets averaged in.

But beyond all those serious problems with the method in general, all of which occur in the present case, this case is even worse. It is worse in a way that makes the result indisputably wrong for what FDA used it for; there is simply no room for “well, that might be a problem but…” excuses. It is easy to understand this glaring error by considering an analogy: Imagine that you wanted to figure out whether blue-collar work causes lung disease. This might not be a question anyone really wants an answer to, but it is still a scientific question that can be legitimately asked. Now imagine that to try to answer it, you gather together whatever studies happen to have been published in journals about lung disease and blue-collar occupations. As a simplified version of what you would find, let us say that you found two about coal miners, one about Liberty ship welders, one about auto body repair workers, one about secretaries, and two about retail workers. So you average those all together to get the estimated effect on lung disease risk of being a blue-collar worker.

See any problem there? If you do, you might be a better scientist than they have at FDA.

Obviously the mix of studies does not reflect the mix of exposures. Why would it? There is absolutely no reason to think it would. Notwithstanding current political rhetoric, only a minuscule fraction of blue-collar workers are in the lung-damaging occupations at the start of the list. The month-to-month change in the number of retail jobs exceeds total jobs in coal mining. But the meta-analysis approach is to calculate an average that is weighted by the effective sample size of each study, with no consideration of the size of the underlying population each study represents. The proper weighting could easily be done, but it was not done in my analogy, nor in the ST estimate FDA used (and it almost never is). If all the studies in our imaginary meta-analysis have about the same effective size, this average puts more weight on the <1% of the jobs that cause substantial risk than the majority that cause approximately zero risk. (Assume that you effectively controlled for smoking, which would otherwise be a major confounder here, creating the illusion that even harmless blue-collar jobs cause lung disease — a problem also seen in ST research.)
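
Here is the blue-collar analogy with made-up numbers, showing how far apart the two averages can be (every figure below is a placeholder of mine, chosen only to illustrate the mechanism):

```python
# Sketch of the weighting error, with entirely hypothetical numbers.
# Equal-size studies are averaged two ways: one study, one vote (as
# synthetic meta-analysis does) versus weighting by job prevalence.
studies = [
    # (occupation, hypothetical RR, hypothetical share of blue-collar jobs)
    ("coal miner",  4.0, 0.003),
    ("coal miner",  3.5, 0.003),
    ("ship welder", 5.0, 0.001),
    ("auto body",   1.5, 0.010),
    ("secretary",   1.0, 0.300),
    ("retail",      1.0, 0.340),
    ("retail",      1.0, 0.343),
]
by_study = sum(rr for _, rr, _ in studies) / len(studies)
by_prevalence = (sum(rr * share for _, rr, share in studies)
                 / sum(share for _, _, share in studies))
print(f"study-weighted RR:      {by_study:.2f}")       # ~2.43
print(f"prevalence-weighted RR: {by_prevalence:.2f}")  # ~1.03
```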

As previously noted, it is not only possible, but almost inevitable that studies will focus on the variations of exposures that we believe cause a higher risk. No one would collect data to study retail workers and lung disease. If they have a dataset that happens to include that data, they will never write a paper about it. (This is a kind of publication bias, by the way. Publication bias is the only one of the many flaws in meta-analysis that people who do such analyses usually admit to. However, they seldom understand or admit to this version of it.)

It turns out that this same problem is no less glaring in the list of “all available U.S. studies” of ST. In that case, about 50% of the weight in the average is on the studies of the Appalachian powdered dry snuff[*], which accounts for approximately 0% of what is actually used. Indeed, the elevated risk from the average is almost entirely driven by a single such study (Winn, 1981), which is particularly worth noting because this study’s results are so far out of line with the rest of the estimates in the literature. A real scientific analysis would look at that and immediately say that this study cannot plausibly be a valid estimate of the same effect being measured in the other studies; it is clearly measuring something else, or the authors made some huge error. Thus it clearly makes no sense to average it together with the others.

Note:
[*] As far as we can tell. The methods reporting in the studies was so bad — presumably intentionally in some cases — that they did not state what product the subjects were using. We know that the Winn study subjects used powdered dry snuff because she admitted it in a meeting some years later, and this was transcribed. She has made every effort to keep that from getting noticed, in order to create the illusion that the products that are actually popular cause measurable risk. For some of the other studies we can infer the product type from gender and geography (i.e., women in particular places tended to be users of powdered dry snuff, not Skoal).

It is amusing to note what Brad Rodu did with this. Recall that the over-represented powdered dry snuff was used by Appalachian women. So effectively Brad said, “ok, so if you are going to blindly apply bad cookie-cutter epidemiology methods rather than seeking the truth with scientific thinking, you should play by all the rules of cookie-cutter epidemiology: you are always supposed to stratify by sex” (my words, not his). It turns out that if you stratify the results from “all available U.S. studies” by sex (or gender, assuming that is what they measured — close enough), there is a huge association for women (relative risk of 9) and a negative (protective) association for men. ST users in the USA are well over 90% male. Brad has some fun with that, doing a back-of-the-envelope calculation to show that if you apply that 9 to women and zero excess risk to men, you get only a small fraction of the supposed total cases claimed by FDA. And this is a charitable approach: if you actually applied the apparent reduced risk that is estimated for men, the result is that ST use prevents oral cancer deaths on net.
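
Here is a sketch of that back-of-the-envelope, in my own numbers rather than Brad’s: the relative risks and the >90% male user base come from the discussion above, but the user count and the baseline oral cancer rates are placeholders I made up (with men’s baseline rate set higher than women’s, as it is in reality):

```python
# Sketch of stratified vs. pooled attributable-case arithmetic. The RRs
# (9 for women, charitably 1.0 for men) and the >90% male user base are
# from the discussion above; everything else is a placeholder.
users, male_share = 5_000_000, 0.92
base_men, base_women = 15e-5, 4e-5   # hypothetical annual baseline risks
rr_men, rr_women = 1.0, 9.0

stratified = (users * male_share * base_men * (rr_men - 1)
              + users * (1 - male_share) * base_women * (rr_women - 1))
pooled = (users * male_share * base_men
          + users * (1 - male_share) * base_women) * (2.16 - 1)
print(f"stratified excess cases/year:  {stratified:.0f}")  # 128
print(f"pooled-2.16 excess cases/year: {pooled:.0f}")      # ~819
# The stratified version yields only a small fraction of the pooled claim.
```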

Notice that in my blue-collar example, you would also get a large difference by sex, with almost all the elevated risk among men. Of course, there is no reason to expect that sex has a substantial effect on either of these, or most other exposure-disease combinations. Results typically get reported as if any observed sex difference is real, but that is just another flaw in how epidemiology is practiced. The proper reason for doing those easy stratifications is to see if they pop out something odd that needs to be investigated, not because any observed difference should be reported as if it were meaningful. When there is a substantial difference in results by sex for any study where the outcome is not strongly affected by sex (e.g., not something like breast cancer or heart disease), it might really be an inherent effect of sex, but it is much more likely to be a clue about some other difference. Maybe it shows an effect of body size or lifestyle. Or perhaps the “same” exposure actually varied by sex. In the ST and blue-collar cases, we do not have to speculate: it is obvious the exposure varied by sex.

The upshot is not actually that when assessing the average effect, you should stratify the analysis by sex (though it is hard not to appreciate the nyah-nyah aspect of doing that). It is that averaging together effects of fundamentally different exposures produces nonsense. If there is a legitimate reason to average them together (which is not the case here), the average needs to be weighted by prevalence of the different exposures, not by how many studies of each happen to have appeared in journals.

It gets even worse. I put a clue about the next level of error in my blue-collar example: the shipyard welders worked on Liberty ships. In the 1940s, ship builders had very high asbestos exposures, the consequences of which were not appreciated at the time. Today’s ship welders undoubtedly suffer some lung problems from their occupational exposures, but nothing like that. Similarly, regulations and better-informed practices have dramatically reduced harmful exposures for coal miners and auto body workers. In other words, calendar time matters. Exposures change over time, and the effects of the same exposure often change too, with changes in nutrition, other exposures, and medical technology. There are no constants in epidemiology. (That last sentence, by the way, is a good six-word summary of why meta-analyses in health science are usually junk.)

One of the meta-analysis papers FDA cites breaks out the study results between studies from before 1990 and after that. It turns out that the older group averages out to an elevated risk, while the later ones average out to almost exactly the null. This is true whether you look at just U.S. studies or studies of all Western products. Does this mean that ST once caused risk, but now does not? Perhaps (a bit on that possibility in Part 3). Some of it is clearly a function of study quality; I have pored over all those papers and some of the data, and the older ones — done to the primitive standards of their day — make today’s typical lousy epidemiology look like physics by comparison. A lot of this difference is just a reprise of the difference between the sexes: the use of powdered dry snuff was disappearing by the 1970s or so (basically because the would-be users smoked instead). In case it is not obvious, if you have a collection of modern studies that show one result and a smaller collection of older studies that show something different, you should not be averaging them together.

In short, a proper reading of the evidence does not support the claim that ST causes cancer in the first place. But even if someone disagrees and wants to argue that it does, that 2.16 number is obviously wrong and based on methodology that is fatally flawed three or four times over. That is, even if one believes that ST causes oral cancer, and even if one believes it could double the risk (setting aside that such a belief is insane), relying on this figure makes the core analysis that justifies this regulation junk science.

The next post takes up the issue of NNN specifically.

FDA’s proposed smokeless tobacco nitrosamine regulation: innumeracy and junk science (part 1)

by Carl V Phillips

I am a bit late to analyze this proposed FDA rule, which was promulgated on Inauguration Day. But it is still open for comments, and I will be submitting these posts (though for reasons I will get to shortly, these and all other comments are probably moot except as for-the-record background).

Before getting to the substance it is worth noting that this is really the first bit of genuine regulation proposed by the FDA Center for Tobacco Products (CTP) in its eight years. Despite CTP reportedly approaching $4 billion in cumulative expenditures, it has only implemented a few inconsequential rules that were specifically required by the enabling legislation, and has never actually created a standard or specific requirement like a real regulator. Instead, everything it has done has been what I have dubbed weaponized kafkaism. The variation on the word “kafkaesque” refers, of course, to Kafka’s horror stories of bureaucratic (in the pejorative sense) rules that create injustice via impossible procedural burdens. “Weaponized” refers to turning something that is harmful but not malign into a tool for intentionally inflicting harm. CTP has turned filing and paperwork hurdles into a weapon.