Monthly Archives: July 2017

Simple Simon would refuse to meet the pieman

by Carl V Phillips

I prefer to write about science (or those pretending to do science, or those who cannot seem to report on it accurately) and the activities of government actors. I very much appreciate the efforts of those who take on blowhards who just pontificate their opinions, but it is just not my thing. However, every now and then, one of the blowhards says something that is really useful as social science. Case in point, a recent tweet by Australian Simon Chapman.

Many of you are familiar, and there is a flurry of recent posts about him because of the current attacks on e-cigarettes in Australia. Others who write about him often label him Simple Simon. Usually I try to avoid making jokes of people’s names, simply because it is always old material: I am sure that Chapman has been called “Simple” regularly since nursery school. But in this case it was too perfect a fit not to use. For those not familiar with Chapman, just imagine Donald Trump in a very small pond. It is a remarkably close analogy: the apparent seething directionless hatred which seems to be salved by political activism, the frequent assertion of obvious falsehoods, the urge to spend time on personally abusive trolling despite ostensibly being a professional and a grownup, and owing his (socially harmful) success to having impressive skills as a con man.

I will not bother to link the tweet because he blocks everyone who ever bests him at verbal sparring, which I am guessing is a lot of you. It was in the context of the Australian government’s consultation (request for comments and inputs) on their e-cigarette policy, which basically comes down to the question of whether to maintain the current ban. He was arguing (to use the term loosely) that manufacturers and merchants should be excluded, because, according to him:

Police don’t meet with drug dealers either.

Here is a screencap (credit for that and for inspiring this post to @K_d_a7):

What makes this piece of pontification interesting is not just that it is so obviously wrong, but what it says about Chapman and his ilk.

Of course, police meet with drug dealers. I am not talking about interrogating them or trying to get them to flip on their colleagues — I’ll give Chapman the benefit of the doubt that he meant to exclude such events (though this is charitable, since he did not say so). Anyone who works on the social science side of public health issues that run up against policing — like, say, a sociologist who works in tobacco control — will have read interesting accounts of such meetings. Well, will have read them if he has an agile mind and actually reads, rather than spending all his time pontificating and trolling. My colleagues who focus on fully banned drugs know the details better than I do, certainly, but it is hard to not be generally familiar.

But here’s the thing: You could figure it out even if you were completely unaware of the history of police meeting with drug dealers. A passing familiarity with the human condition would tell you that it must happen. The only time a faction in a political/economic/military/whatever struggle would never want to talk to another faction is if they had absolutely no common interests. This is the case approximately never. Even states engaging in total war still share an interest in, e.g., POWs not being tortured and executed and in letting civilians escape from besieged cities. Only players in two-player board games have no common interests and nothing to talk about. Once you move beyond such games to something that is merely as real-world as a football match, the players and teams have shared interests, such as agreeing to not let the match degenerate into an injury-producing brawl.

Some drug dealers might want to get out of the business. Even if a cop would rather have a charge stick and make them a burden on the state for twenty years, he might still settle for helping them get out (and a good cop would prefer the latter, of course). The police might have an opportunity to encourage dealers to engage in better business practices, such as reducing violence — anything from heading off a gang war to reducing everyday threats to innocent bystanders. They would do this even though this might not reduce the flow of drugs. Indeed, they might do this even if they were fairly sure it would increase the flow, since they do not share tobacco control’s obsessive and antisocial priorities. If the police discover that a deadly batch of a drug is on the streets, they should (and presumably usually do) alert the relevant dealers they know.

All of these represent examples of the shared goals that inevitably exist even between factions who are mostly adversaries. They also have some fairly obvious analogies in the shared goals of the tobacco industry and (real) public health. Only someone with a “death to them all!” attitude — i.e., not someone who really cares about public health, or human beings for that matter — would disagree. We are talking ISIL or Duterte levels of inhumanity. Of course, tobacco controllers have praised both ISIL and Duterte, so that is who we are dealing with.

Chapman is notorious for declaring that anything that tobacco companies object to must be good — he calls it “the scream test”. Who knows whether he actually believes this, or whether he believes anything for that matter — see aforementioned comparison to Trump. In particular, he absurdly concluded that because major tobacco companies object to brand confiscation (plain packaging), citing as reasons that it aids black marketeers and increases sales of cheaper non-name-brand products at their expense, it means that it must be a good policy. Um, yeah. By his logic, since people at tobacco companies are opposed to an asteroid collision exterminating humanity, then that too must be a good thing.

As if to assist my analogy, I ran across this tweet while writing this post, about the discourse in American politics:

Anyway, the point here is not to observe that Chapman tweeted something that is obviously false. It is not as if that list needs any additions. As with Trump, it is basically an everyday occurrence, saying something he thinks is clever but that just demonstrates he does not understand the world. Well, it demonstrates that to most people — the two men’s bases, which have similar levels of thoughtfulness, presumably eat it up.

But as with Trump or Pravda, there is often something to be learned from content even when you know the goals of the author did not include conveying honest information. In this case, it is insight into tobacco controllers’ tendency toward an expansive version of the mirror-image delusion. As I discussed in more detail here, the mirror-image delusion describes people with limited insight who assume that whoever they are looking at across a table is just like them. This has come up in the context of the Australian consultation, with tobacco controllers accusing everyone who submitted an objection to the ban on e-cigarettes of being a paid shill. Others have written about the obvious irony of people who only act when they are getting paid accusing honest people — who just want the right to control their bodies and use a preferred alternative to cigarettes — of being shills. Chapman’s comment makes clear that his ilk are similarly deluded about everyone’s motives.

They really think they are playing a two-player winner-take-all board game against the tobacco industry, or perhaps against the genus Nicotiana itself. They often appear oblivious to the primary stakeholders, the consumers. But it is really worse than that. If it is a two-player winner-take-all game, then those consumers are actually the enemy. (I explore that theme further in what I consider my best post ever.)

That is not a new insight. But it might mean something somewhat different when considered in the context of thinking everyone in the world is their mirror image. In their mind, everyone acts as if the world were a board game. Cops do not meet with criminals because they have no common interests. There are no peace or disarmament talks. Political caucuses do not negotiate deals with each other. Those doing real public health work — health inspectors and advocates who wish to improve nutrition — never talk with the pieman. Tobacco controllers did not negotiate deals with the major American cigarette companies to create the mutually-beneficial Master Settlement Agreement and Center for Tobacco Products.

Of course that last bit reminds us that tobacco control attracts a certain kind of grifter, who just sees it as a good way to make money (not just in the USA — see also: ASH; also, recall the Trump analogy). But the people who those grifters let serve as their public face are a special bunch for another reason. They are people who never grew beyond the male adolescent worldview that the world is all about two-player board games. They are not just pretending to believe that everyone thinks that way. They give commendations to Duterte not because they are setting aside his ruthless behavior in policing — though that alone would be unforgivable — but because they admire and envy it (note Trump analogy again, though I doubt it is too subtle here). That, to them, is the right way to run a police force, not with all that silly treating all people as human beings who are worthy of concern like this loser.

Needless to say, I am not gleaning all this from one tweet by one serially abusive troll. It is based on a pattern of observation and was merely made stark by that tweet. The fact that tobacco control is a comfortable home for those who genuinely feel that cops should just be executing the “bad guys” is yet another reason why real public health needs to disavow tobacco control just like they do other drug warriors.

(update) Postscript: Saw this ten minutes after posting. Presented without further comment:

FDA’s proposed smokeless tobacco nitrosamine regulation: innumeracy and junk science (postscript)

by Carl V Phillips

For completion of this series (with this footnote), the following is what I submitted to FDA. My comment does not yet(?) appear on the public docket as of this writing. But I got a confirmation (conf code 1k1-8xfb-dhwh if you want to search for it later). It has a bit of extra content beyond what I already presented.

I know a few of you urged me to rewrite my analysis in a more, er, formal manner. While I understand your reasoning for doing so, I chose not to take time from my other obligations to do that. I honestly think it does not make any difference. I am reasonably confident that FDA “fulfills” their obligation to consider all the comments by having a low-level staffer read each one, without reporting anything of substance up the chain, so they can check a box that says they read and considered each of them. If this proposed rule is not withdrawn for political reasons or as a result of the various procedural problems, then whoever is pursuing a lawsuit to strike it down can enjoy my essays as they mine them for substance. (Shameless plug: Of course, if they would like to hire me to formalize anything, I am quite good at that.) Besides, I might manage to embarrass that staffer who reads it into going into a more honorable line of work.

The content follows:


The primary purpose of this comment is to demonstrate that FDA’s assessment of the supposed benefits of this rule (115 fatal cancers averted per year) is fatally flawed for approximately half a dozen reasons, each one of which is sufficient to invalidate it. I have published the analysis in the following three blog posts, which I incorporate into this comment by reference:

https://antithrlies.com/2017/06/26/fdas-proposed-smokeless-tobacco-nitrosamine-regulation-innumeracy-and-junk-science-part-1/

https://antithrlies.com/2017/06/29/fdas-proposed-smokeless-tobacco-nitrosamine-regulation-innumeracy-and-junk-science-part-2/

https://antithrlies.com/2017/07/02/fdas-proposed-smokeless-tobacco-nitrosamine-regulation-innumeracy-and-junk-science-part-3/

(I have also attached printouts of them for completeness, but I would suggest reading the online versions with live links.)

The implication of that analysis is that there is no scientific basis for claiming that any disease incidents will be prevented by this rule, let alone the specific quantity claimed by FDA as the rule’s justification. Based on this alone, the rule should be withdrawn.

This analysis should not be interpreted as implying that if, counterfactually, the 115 figure were actually science-based, then it would justify the rule. There is no analysis of the negative health impacts from driving smokeless tobacco users to smoking when their preferred products are banned. The absence of this analysis is another sufficient reason for withdrawal of the rule. Moreover, even if there were a legitimate reason to believe there were health benefits, and even if there were no health costs, justifying this rule would require a cost-benefit analysis that considered the welfare loss to consumers and other costs. The absence of this analysis is yet another sufficient reason for withdrawal of the rule.

Finally, given the lack of cost-benefit analysis of any sort, there obviously is no justification for choosing the particular quantitative standard in the proposed rule (even apart from the fact that it appears to be 1/4 of the intended quantity). This makes the choice of the standard arbitrary and capricious. It appears it must have been chosen with an eye to which particular winners and losers it would create, as I presented in this footnote to the previous analysis here (incorporated into this comment by reference and also attached):

https://antithrlies.com/2017/07/09/sunday-science-lesson-toxicology-and-the-chains-in-american-football/

While not central to the main point of this comment, this is a further problem with the legitimacy of this rulemaking.

Sunday Science Lesson: toxicology and “the chains” in American football

by Carl V Phillips

Those of you who read my series on fatal flaws in FDA’s proposed rule about limiting the nitrosamine NNN in smokeless tobacco (and presumably anyone reading this quick little tangent read those important and carefully crafted posts) might have tripped up over an oddity from the third post in the series. I quoted this from FDA’s proposed rule about how their key number, used for estimating the risk of cancer caused by some quantity of NNN, was calculated:

As defined by the EPA guidelines, the cancer slope factor (CSF) is “an upper bound (approximating a 95percent [sic] confidence limit) on the increased cancer risk from a lifetime exposure to an agent.”

I noted (you can read the original for more detail) this means that when FDA estimated the dose-response for NNN, they did not use the point estimate generated by the underlying study, but inflated it by an arbitrary fudge factor (which is not actually an upper bound, as claimed, but is still much higher than the point estimate). This is obviously an error. There are arguments that using such inflation factors when setting standards (e.g., how much of a potentially toxic substance a facility is allowed to emit) is appropriate, to err on the side of caution. But an inflation factor, creating a number higher than what the data suggests is the best estimate, obviously does not give us the best estimate for the actual dose-response. I also observed that the model used to translate the data from rodent megadose studies into an estimate for the effects of realistic human exposures was fraught with huge, undoubtedly incorrect, assumptions that made the final result nearly worthless, even apart from this.
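To make the arithmetic of that objection concrete, here is a minimal sketch. Every number in it is hypothetical (invented purely for illustration, not taken from FDA's analysis); the linear risk = CSF × dose form is the standard EPA-style low-dose model the quoted passage refers to.

```python
# Hedged illustration with made-up numbers: how an inflated "upper bound"
# cancer slope factor (CSF) propagates into a risk estimate under the
# standard linear low-dose model, risk = CSF * dose.

def lifetime_cancer_risk(csf, dose_mg_per_kg_day):
    """Excess lifetime cancer risk under the linear model: CSF * dose."""
    return csf * dose_mg_per_kg_day

dose = 0.001                # hypothetical lifetime average daily dose (mg/kg-day)
csf_point_estimate = 10.0   # hypothetical point estimate from the underlying study
csf_upper_bound = 25.0      # hypothetical inflated "upper bound" figure

risk_point = lifetime_cancer_risk(csf_point_estimate, dose)
risk_upper = lifetime_cancer_risk(csf_upper_bound, dose)

# Whatever factor the CSF is inflated by, the final risk estimate (and any
# "deaths averted" figure built on it) is inflated by that same factor.
print(f"point-estimate risk: {risk_point:.4f}  upper-bound risk: {risk_upper:.4f}")
```

The point of the sketch: because the model is linear, substituting an inflated CSF for the point estimate inflates every downstream number by exactly the same factor, which is why it cannot yield a best estimate of anything.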

So you might be asking why such lousy models and arbitrary fudge factor rules even exist. They are clearly grossly inappropriate for what FDA was doing — no ambiguity there. But presumably they serve some purpose, or they would not exist.

I found myself flashing back to when I was ten or twelve years old and a fan of American football. There is a process in that game that occasionally occurs, in which a very close judgment has to be made about whether the offensive team advanced the ball the required ten yards to get a “first down”. (That is all you need to know. Obviously most American readers will know more details. Also, I realize I do not even know whether what I describe is still done at professional levels, given that it could be replaced by imaging and computers, but it is presumably still done at least in high school games.) At that point, two officials run in from the sidelines carrying “the chains”, a pair of posts connected by a ten-yard chain. One of them places one end at the starting point for the required ten yards of progress. The other then pulls the chain taut and observes whether the current placement of the ball is a little past his post or a little short of it.

You might wonder why. It is no easier to identify the exact starting point, and measure from it, versus just identifying the exact ending point needed for the first down. Since the play will usually have moved the ball sideways, it is not as if someone can just remember the exact blade of grass the ball was on at the start; it is necessary to eyeball the corresponding point on an imaginary line across the field. Also, the ball is not a single point. And the current placement of the ball after the last play was somewhat arbitrary too. So why not just eyeball the spot that is ten yards further (using as a guide, in either case, the markings of yards that are painted on the field, though not necessarily exactly on the line the ball is on)? Further contemplation reveals that the answer lies in game theory.

If one official has to eyeball a target point near where the ball is sitting on the ground, either just ahead of it or just behind it, he is full-on deciding whether to award the first down. That creates a huge amount of pressure and also creates an enormous potential for exercise of any bias that official is feeling for whatever reason. It could be nefarious bias. But it could be an innocent moral struggle such as, “I denied them the last close call that could have gone either way, so I owe them this one that could go either way.” Or it could be an attempt at beneficence in violation of the procedural rules like, “where the ball is sitting is short of my estimated spot, but my colleague who decided where to place the ball after the last play really should have put it further forward and I can fix his error.” But when the first official eyeballs his spot ten yards back, he cannot be sure whether an inch one way or another even matters and can just do it mechanically without all those inconvenient thoughts cropping up. Of course, the colleague could exercise nefarious bias when he chooses where to pick his spot; an inch forward or an inch back are both plausible estimates of the starting point. But the complicated mechanism reduces the temptation to exercise such bias somewhat, and strongly reduces effects of the “I owe them this one” or “I can fix it” factor.

Regulators setting an allowable level of potentially harmful effluent, contaminant, or ingredient also have to draw a line. The right place to draw the line is hugely uncertain, both in terms of what levels are actually harmful and the political decision about what level of harm should be allowed (this contrasts with the American football analogy). Getting it right is pretty much impossible. Still, issues like those facing the football referee can be avoided. If regulators are allowed to draw the line when looking at exactly where the ball is sitting, as it were, they are deciding such things as “this product is fine, but its leading competitor is banned,” or “the facilities operated by our boss’s biggest campaign donor all just squeak in under the line.” That would not be good.

So instead they create a rule that says “make an estimate based on this crazy dubious model and then inflate the result by this predefined arbitrary factor, and draw a line based on that.” This does not eliminate directional bias (intentionally trying to be more or less stringent) in defining the models or inflation factors, or in interpreting the underlying data. But it does help avoid someone saying “hey, if I just bump this limit down from 7.5 to 6.8, I can really stick it to that company that I have always hated.” Since the proper line is enormously uncertain, that would be easy to do.

For the same reason, it does not matter so much that many of the steps in the defined process are just silly. You can still get outcomes where experts largely agree that the standard spit out by the sketchy complicated (but well-defined) process is way too low or too high. But even then, at least it offers a starting point for debate that was not just someone capriciously making up a number. Most of the time, the genuine uncertainty is sufficient that the result of the process might actually be the optimal number.

Circling back to the FDA, it is worth noting that their proposed rule in no way resembles this clumsy, but arguably justifiable, process. They were not following a rule that spit out a quantitative standard that, while probably non-optimal, was at least non-arbitrary. No, they misused elements of this process to (inaccurately!) estimate the effects of their proposed standard. But their standard itself was still an arbitrary and capricious number that was pulled out of the air. This was done with the clear view of exactly which products would make the cut, which would have to be re-engineered, and which would be banned. This is exactly the bright-line decision about who wins and who loses that those football and normal regulatory rules are designed to prevent.

Well, I should say that FDA thought they had a clear view of exactly which products would be affected. As noted in the first post in my series, they actually made a factor-of-four arithmetic error that means far more products would be affected and far more banned than they intended. But the point is still that they were misusing the trappings of a process that is designed to avoid exactly such picking-and-choosing, while still trying to engage in arbitrary picking-and-choosing.

FDA’s proposed smokeless tobacco nitrosamine regulation: innumeracy and junk science (part 3)

by Carl V Phillips

In Part 1 of this series, I described FDA’s proposed rule that would require smokeless tobacco products (ST) to have no more than 1 ppm of NNN (a tobacco-specific nitrosamine or TSNA) dry weight. I discussed some of the political and policy implications of this, and reasons why the rule will probably not survive. I also noted that almost no current products meet that standard, and that American-style ST probably cannot meet it. Despite the proposed rule probably being mooted, I noted there is still value in examining just how bad the ostensibly scientific analysis behind it is. In Part 2, I noted that the FDA’s estimate that the standard would save 115 lives per year is premised on their estimate for the risk of oral cancer caused by ST use. But, in fact, the evidence does not support the claim that ST use causes any oral cancer risk. I then focused on why, even if one believes there is some such risk, the method used to calculate FDA’s quantitative estimate is utter junk science.

So far, none of that has addressed NNN itself, and how meeting the NNN standard would affect the carcinogenicity of ST, if it is carcinogenic. It turns out that this part of FDA’s analysis is even worse than that discussed in Part 2.

Estimating the health effect of a quantitative standard for an exposure is a matter of estimating the relevant range of the dose-response curve, along with knowing how much people’s dosage would change. That is, you need estimates like, “N people use product X, which has 5 ppm NNN, which causes Y risk per person, versus the Z risk per person from 1 ppm, so multiply N by (Y-Z)….” With such numbers we could estimate the effect of an adjustment in the NNN concentration.
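The structure of that back-of-envelope calculation can be sketched as follows. Every number below is hypothetical, invented only to show the shape of the arithmetic; FDA's actual inputs are not reproduced here.

```python
# Hypothetical illustration of the structure described above:
# deaths averted = sum over products of N * (Y - Z), where Y is the
# per-person risk at the current NNN level and Z the risk at 1 ppm.

products = [
    # (users N, per-person risk at current NNN level Y, risk at 1 ppm Z)
    (1_000_000, 4e-5, 1e-5),
    (2_000_000, 2e-5, 1e-5),
]

deaths_averted = sum(n * (y - z) for n, y, z in products)
print(round(deaths_averted))  # with these made-up inputs: 50
```

Note that every input to this sum — the user counts, and especially the Y and Z risk figures — has to come from somewhere, which is exactly the problem the rest of this post takes up.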

In reality, it is not that simple. In Part 1, I pointed out that most products could not just have their NNN concentration “adjusted” like that, and that they would have to be fundamentally changed, effectively eliminated and replaced in the market (perhaps if FDA had not made the arithmetic error noted in Part 1, that would only be “some” rather than “most”). Many consumers of the eliminated or fundamentally altered products would not be happy with the new option. Some would just quit, eliminating the Y risk as well as any other risk from using the product (setting aside that, as far as we know, both are nil; remember, we are down that rabbit hole here). Some would switch to smoking, creating a risk that is orders of magnitude greater than anything discussed so far, making all of the details moot: the net health impact would be an increase in risk.

But that is the simple practical criticism of this madness, one that hinges on questions of consumer behavior (an area where FDA’s analyses are consistently absurd, but they always manage to trick their audience into accepting their assertions). That is not what I am doing here, though I suppose I just did it in one paragraph. My goal is to point out that FDA’s core claims about benefits here are based on junk science, setting aside the enormous costs that would dwarf them anyway. So returning to my point here, what basis do we have for estimating Y, Z, and other points along the dose-response curve?

None.

Absolutely nothing.

Indeed, we do not even know that NNN in ST affects cancer risk at all.

As I mentioned in Part 1, if you are only familiar with the rhetoric about this topic, and not the science, you would be forgiven for not knowing that the assertion there is any such effect is based only on heroic extrapolations and assumptions. You might further surmise that since FDA claims that this reduction would reduce cancer deaths by 115 per year (note: not “about 100”, but as precise as 115), there is not only evidence that NNN in ST causes cancer, but there is also so much evidence that we can precisely estimate a dose-response.

What we know about NNN and cancer is based on biological theory (we have evidence that some nitrosamines cause cancer in humans), and the effects of exposing rats, hamsters, and other critters — species whose propensity to get cancer from an exposure is often radically different from ours, and even from one another’s — to megadoses of NNN. Those toxicology studies do suggest that NNN exposure probably causes cancer in humans, in a big enough dose, and under the right circumstances. Of course, that is also true for almost everything. When IARC, the cancer research arm of the WHO, made their blatantly-political decision to declare NNN a known human carcinogen, they did so in violation of their own rule that there has to be some actual human exposure evidence before making such a declaration. There is not. But even if someone believes that NNN in ST does cause cancer in humans, the rodent megadose data obviously does not tell us anything about the effect of the reduction in dosage imposed by this rule.

Stepping back, it is useful to understand the potential legitimate use of toxicology studies like those. They — or, better, in vitro studies of cells that are actually similar to the human body and do not require sociopathic torturing of innocent animals — are useful for giving us a heads-up that a chemical or combination of chemicals might be carcinogenic or poisonous. This might be a good reason to undertake the more difficult search for epidemiologic evidence that the real-world version of the exposure is causing the bad outcome. Or at least a reason to pursue the in-between step of looking for biological evidence of harm from the real-world exposure in humans. It might even be sufficiently compelling to prohibit introducing a novel exposure, acting before we can even get any human data.

If toxicology studies of a chemical all fail to produce a bad outcome, this strongly suggests that the exposure will not cause the harm, so long as that failure is consistently confirmed using various toxicology methods (claims that a single toxicology study shows that an exposure is harmless, which are currently appearing in the pro-vaping rhetoric, are misguided). But getting a bad outcome in a particular toxicology study does not mean that the real-world exposure actually does cause harm. The pattern in the toxicology has to be far better than what we have for NNN before such a conclusion is justified, including getting the effect at reasonably realistic exposure levels and fairly consistently across a variety of methods.

Consider an analogy: We are interested in knowing whether there is life on other planets, but actually going there to take a look is rather difficult. We have a much cheaper tool in our toolbox, however, which is to use modern telescopes to see if light scatter suggests a water-rich atmosphere. Of course, that is far short of observing life; it would be insane to say “we saw evidence of water, so there must be life there!” But since the versions of life that we understand require there to be enough water, seeing that creates the intriguing possibility of life. Failing to find water tends to rule out the possibility of life as we know it.

Another legitimate use of toxicology is to tell us why an exposure is causing harm. Of course, this should mean there is evidence of harm, not just some wild assumption that there is harm. Continuing the analogy, pretend that someone looked at the light scatter around Mars and claimed they saw enough water to support life: “Aha, this shows that the canal-building civilization is water-based life as we know it.” Um, but you do know that early 20th century telescopes debunked that 19th century canals myth, right? Also we have had numerous close observations of the planet and little labs driving around on the surface. Your hint about the possibility of life is utterly pointless given that we have much better information about the reality.

I have often described the TSNA toxicology research, which inexplicably continues to this day, as an attempt to identify which chemical pathways cause a cancer outcome that does not actually occur. As with Mars not having canals, we know that ST use does not cause a measurable risk for cancer, and therefore the NNN and other TSNAs in ST are not causing a measurable risk (unless we think that other aspects of the ST exposure prevent exactly as much cancer as the TSNAs cause, something that no one is seriously proposing). One possibility that has been seriously proposed — e.g., by Brad Rodu, whose work I cited in Parts 1 and 2 — is that something else in ST, perhaps antioxidants, directly negates whatever cancer-causing effect the TSNAs might have if we were exposed to them alone (which does not happen at a level beyond a few stray molecules). Indeed, when the exposure is tobacco extract, those rodent studies fail to show the carcinogenic effect from NNN, or anything else in ST for that matter, a fact that is conveniently glossed over.

So how did we end up with the “fact” (which I suppose should be called the fake news in current parlance) that NNN and other TSNAs in ST cause cancer? It basically comes down to circular reasoning, or perhaps it is figure-eight reasoning since there are two circles as well as a few other fallacies. It goes something like this (and I am really not exaggerating):

“Given that we have only seen an effect in megadose rat studies, how can we really be sure that TSNAs at the relevant dosage and in a realistic exposure cause cancer?”

“Because smokeless tobacco causes cancer, and it contains TSNAs.”

“But [even setting aside that we do not know that is true] how could you know it was the TSNAs causing it.”

“Because we know TSNAs cause cancer.”

“Um, isn’t that so transparently circular that even tobacco control’s useful idiots will see right through it?”

“There is more. We know that higher-TSNA products cause more cancer risk.”

“Ah, now that sounds like actual evidence. Please explain.”

“US products have higher TSNA levels than Swedish products, and US studies show a cancer risk while Swedish studies do not.” [Note: see appendix to this dialogue, below.]

“But didn’t you read Part 2 of this series? That contrast does not appear in studies of modern US products, but only from a few studies of an archaic type of product.”

“Yes, exactly. That product was very high in TSNAs, and its cancer effects were off the charts compared to modern products. Case closed.”

“There are no measurements of the TSNA levels of those archaic products. How do you know they had high TSNA levels?”

“Isn’t it obvious? They must have, because they caused cancer and TSNAs cause cancer.”

Loopity loopity loop.

In fairness, there are honest observers, including Brad Rodu, who hypothesize that this is indeed the reason the archaic products apparently caused cancer. But this is just a hypothesis, and it cannot be tested. Indeed, we cannot even replicate the basis for claiming those products caused cancer in the first place. It basically comes down to a single study from the 1970s — not exactly overwhelming evidence.

A bit more useful background: In the 2000s, the anti-ST crusaders in and funded by the US government (CDC and NCI, before FDA joined the game) fought a rearguard action against the evidence that had emerged from Sweden that ST was approximately harmless. Part of this was insisting that the higher levels of TSNAs in US products meant that the Swedish evidence was not informative. It was political bullshit on its face. Still, I wrote an analysis over a decade ago that showed that the ST products that produced those null results in Sweden had about the same TSNA levels as then-current US products. (This was based on limited analytic chemistry from before 2000. There were only a handful of TSNA concentration studies in the public record. But there was enough to show this.) TSNA levels in all styles of ST products were and are decreasing over time. It might have been true that 1990 US products were materially more hazardous than 1990 Swedish products (which showed no measurable risk) because they had higher TSNA levels. But mid-2000s US products had low enough TSNA levels that this would have no longer been true. This leads to the appendix for the dialogue. We could imagine this variation:

“US products have higher TSNA levels than Swedish products, and US studies show a cancer risk while Swedish studies do not. Also there is a time trend, wherein TSNA levels have been dropping in both US and Swedish products, and older studies found elevated cancer risks, while newer ones do not.”

“Part 2 of this series dismisses your first sentence. But the second sentence makes some sense, though it might just be because the older studies used really primitive methodology. Still, you have a prima facie valid point there, unlike all your other complete bullshit. But, hey, doesn’t that also mean you are conceding the fact that modern ST products do not cause any measurable cancer risk, even if older products might have?”

“Er, no. We never said that. We never made any claim about time trends despite it being the most scientifically defensible argument we have. Strike all that from the record.”

Summarizing this, we have only unsupported hypotheses and circular reasoning behind the claim that NNN in ST causes any of the (quite possibly zero) cancers caused by ST. Given this, we obviously know nothing about how much cancer a particular concentration of NNN causes. That is sufficient to show that FDA’s claim cannot possibly be science-based. But I am sure you share my curiosity about how FDA took this complete lack of information and turned it into the conclusion that exactly 115 lives per year would be saved by this regulation.

Here it is (from the proposed regulation):

….increase in oral cancer risk of 116 percent among smokeless tobacco users compared with never users. We then reduce this value by 65 percent based on toxicological evidence relating the estimated average reduction in the dose of NNN to lifetime cancer risk under the proposed standard. The result is a reduction in the estimated relative risk of oral cancer to 1.41 under the proposed product standard. FDA used the following calculation: (1 + (2.16−1) × (1−0.65) = 1.41) for this determination.

Thanks, guys, for showing us how to do that arithmetic so I did not have to find a third grader to ask. The important bit of showing their work, of course, is about justifying the inputs. In the introduction, FDA refers the reader to section IV.C for the basis for the .65 figure. It is really section IV.D, because, hey, just because you spent a million dollars writing a regulation that is potentially devastating for industry and millions of consumers does not mean you should bother to have someone edit it. It turns out the assumption is that the dose-response is linear across all quantities, and under that assumption the effects observed from megadoses in rodents give a dose-response that translates into the .65. The generic problems with this include the fact that the linear (also known as “one hit”) model of carcinogenesis has long since been dismissed as invalid, the folly of extrapolating orders of magnitude beyond the observed data, and the little matter that rodents are not people.
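For the record, the arithmetic in the quoted passage is trivial to reproduce. Here is a minimal Python sketch (the variable names are mine; the numbers are FDA's):

```python
# FDA's calculation: a 116% increase in oral cancer risk means a
# relative risk of 2.16, and the claimed 65% toxicological reduction
# is applied only to the excess risk above 1.
baseline_rr = 2.16    # claimed relative risk for ST users vs. never-users
nnn_reduction = 0.65  # claimed reduction in risk from the lower NNN dose

projected_rr = 1 + (baseline_rr - 1) * (1 - nnn_reduction)
# projected_rr is about 1.406, which FDA rounds to 1.41
```

All the heavy lifting, of course, is hidden inside those two input numbers.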

It gets worse still when you look at the equation that FDA used to calculate the fictitious linear trend. (And I am not referring to the fact that they actually cut-and-pasted the equation in their document as an image from some low-res PDF of someone else’s document. This is not a scientific flaw, of course, but it does suggest the proposed rule was written by people who have so little education and experience in science that none of them had ever learned how to typeset a simple equation.) The equation builds in the assumption that a very high exposure for a short time (e.g., what the rats experienced) has the same effect as the same total exposure stretched out over many years. This is the linearity assumption taken to the extreme. It assumes not only linearity for each parameter — i.e., increasing years of exposure, quantity per exposure, or number of exposures per day by Y% increases risk by Y% — which is completely unsupportable and almost certainly wrong, but also a multiplicative effect for all interactions, which is equally unsupportable and almost certainly wrong. For those who did not follow that, I will explain its major implication: the assumption is that a given lifetime quantity, X, of NNN exposure creates the exact same total cancer risk whether it is consumed all in one day, or one month, or spread out over 70 years. It is the same whether an ongoing exposure takes place all at once each Monday morning or is spread evenly throughout the week. Moreover, if you increase X by 10%, it increases the risk by 10% no matter how the consumption is spread out. On top of all that, if someone’s body mass is 10% lower, his risk from X is always increased by 10%. If his mass is 99.963% lower (i.e., he is a hamster and not a human), then the risk is increased exactly 2720-fold.
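To make the absurdity concrete, here is a hedged Python sketch of the fully linear, fully multiplicative model just described. The CSF value and doses are placeholders I made up for illustration, not FDA's numbers:

```python
def fully_linear_risk(csf, dose_per_day, days, body_mass):
    # Risk depends only on total lifetime dose per unit body mass:
    # linear in each parameter and multiplicative across all of them.
    return csf * dose_per_day * days / body_mass

# Same lifetime dose on wildly different schedules (placeholder units):
binge = fully_linear_risk(0.001, 70 * 365.0, 1, 75)     # all in one day
lifelong = fully_linear_risk(0.001, 1.0, 70 * 365, 75)  # 1 unit/day for 70 years
# The model says binge == lifelong, which is biologically absurd.
```

Any model with that property is not describing carcinogenesis; it is describing multiplication.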

Such simplifying assumptions about linearity and multiplicativity are not terrible if you are interpolating (i.e., you have data from both sides of the quantity you are assessing and you are trying to fill in the middle) or are extrapolating a little bit beyond the range of your data. But in this case they are extrapolating orders of magnitude beyond the rat data: weeks of exposure rather than decades, 30 g bodies rather than 75 kg, and crazy large doses. And, of course, there is the little matter of assuming that a different exposure pathway in a different species has the same effect as ST exposure in humans. Given such a huge extrapolation, the slightest departure of the assumptions from reality (and it is safe to say that the departures are more than slight) means that the final estimate is complete garbage.
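The danger of that kind of extrapolation is easy to demonstrate numerically. Here is a hedged Python sketch with made-up numbers: two dose-response models calibrated to agree exactly at the megadose where the rodent data sit, then extrapolated four orders of magnitude down to a realistic dose:

```python
megadose = 1e4       # arbitrary units, where the rodent observations are
real_dose = 1.0      # a realistic exposure, four orders of magnitude lower
observed_risk = 0.5  # assumed risk observed at the megadose (made up)

def extrapolated_risk(dose, exponent):
    # Calibrate a power-law model to match the megadose observation
    # exactly, then extrapolate down to the realistic dose.
    slope = observed_risk / megadose ** exponent
    return slope * dose ** exponent

linear = extrapolated_risk(real_dose, 1.0)       # the linearity assumption
mild_curve = extrapolated_risk(real_dose, 1.25)  # a modest departure from it
# The two models are indistinguishable at the megadose, yet the linear
# one predicts ten times the risk at the realistic dose.
```

A modest error in the assumed shape, compounded over orders of magnitude, swamps everything else in the calculation.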

It gets worse. The key parameter is what is multiplied by the total lifetime units of exposure in order to estimate risk, which FDA calls the “cancer slope factor” or CSF if you want to search for it in the document. For this, they rely entirely on a 1992 estimate from the California EPA, which itself was based on the results of a 1983 paper that looked at what happens when hamsters were given huge doses of NNN dissolved in their drinking water. Yes, really. FDA’s number ignores the ~99% of the relevant research that has been done in the last three decades, and it was obviously pretty sketchy even in 1992 given that it was based on a study whose real information value (about actual human exposures) was approximately nil. Moreover, there is this:

As defined by the EPA guidelines, the cancer slope factor (CSF) is “an upper bound (approximating a 95percent [sic] confidence limit) on the increased cancer risk from a lifetime exposure to an agent.”

So apparently (the methods are reported so poorly that it is hard to be certain) they not only based this key number on evidence — to use the word rather loosely — from a single ancient toxicology study, but they did not even use the actual estimate that was generated from it. Rather, they used a larger number generated via an arbitrary process. The upper bound of a 95% confidence interval is a completely meaningless number in this context. There is an argument (which many would call dubious) that some arbitrary inflation of the point estimate like this should be used in “abundance of caution”-based regulations. (Update: More on this in my follow-up post.) But it is not an estimate of the actual effect. I know this seems like an arcane technical point in the context of everything else, but I cannot stress enough what an enormous failure of legitimate science this is (assuming they did what it sounds like they did). It means, for example, that if there had been fewer observations collected in that 1983 study, but it had still supported exactly the same point estimate, FDA would be claiming some larger number of lives saved, like 125 per year rather than 115.
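To see why using an upper confidence bound as if it were the estimate is perverse, here is a hedged Python sketch with made-up numbers (the point estimate, variability, and study sizes are all assumptions for illustration, not anything from the 1983 study):

```python
import math

point_estimate = 1.0  # assumed CSF point estimate (arbitrary units)
per_obs_sd = 0.4      # assumed sampling variability of one observation

def upper_bound_95(n_observations):
    # One-sided 95% upper confidence limit: the point estimate plus
    # 1.645 standard errors; the standard error shrinks with sqrt(n).
    standard_error = per_obs_sd / math.sqrt(n_observations)
    return point_estimate + 1.645 * standard_error

larger_study = upper_bound_95(100)  # about 1.07
smaller_study = upper_bound_95(10)  # about 1.21
# Same point estimate, but the smaller study yields a bigger "CSF" —
# i.e., less evidence produces a scarier regulatory number.
```

A statistic that grows when you collect less data is measuring your ignorance, not the carcinogen.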

When presenting this number, and practically admitting it is junk (despite using it to calculate their estimate of 115 to three significant figures), FDA writes:

FDA welcomes public comment on whether there is a more robust CSF available for NNN.

This is a classic bit of anti-scientific rhetorical strategy. Anyone answering that question as phrased is implicitly conceding that the estimate FDA used has some validity. Respondents are effectively conceding that if they cannot make a compelling case that some other number is better, then FDA’s number was appropriate to use. When a question’s phrasing builds in invalid assumptions, or when it assumes away the really important questions (“Have you stopped beating your wife?”), the response needs to unask it, not answer it. So here is my unasking answer to their welcoming of public comment:

The number FDA used has absolutely no hint of validity. However, there is no robust, or even remotely plausible, basis for generating this “CSF”; any number used here might as well be made up out of thin air. That said, given that ST does not seem to cause oral cancer in the first place, the best default estimate is zero. There is no legitimate basis for concluding an estimate of zero is wrong. Oh, and if you are going to use a junk-science extrapolation from rodent studies, you should at least calculate this number based on all such studies to date. If you are not capable of doing that analysis, and instead are limited to using the approach any middle-school student would use if confronted with this question (run a search and blindly transcribe whatever someone once wrote), then you have no business regulating anything!

I’ll take a deep breath here, because that is still not all. Look back at that grade-school arithmetic they showed us. Notice any assumptions embedded in it? Yes, that’s right, they assumed that all the cancer risk that they claim is caused by ST is caused by NNN, and thus a .65 reduction in the risk from NNN exposure is a .65 reduction in total risk. Wait, what? FDA did some hand-waving in their document about reductions in NNN also carrying along reduction in another TSNA, NNK, but they never tried to justify the claim that the (supposed) cancer risk was all due to NNN or even NNN plus NNK. How could they?

Effectively, FDA has just declared that whatever cancer risk (or at least oral cancer risk) is caused by ST consumption is caused entirely by TSNAs, with no other molecules contributing any risk. They never suggested this was a simplifying assumption. This could have some amusing implications. The next time you see one of those bits of anti-scientific propaganda about ST containing 27 carcinogenic chemicals (or whatever number they are making up that day), you can reply that FDA has declared that at least 25 of those do not actually cause cancer. On the other hand, we should probably not push this too hard. I am guessing that, given all the other errors, the authors of this rule did not understand their own arithmetic sufficiently to know they were implicitly declaring this to be true.
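How much does this buried assumption matter? Here is a hedged Python generalization of FDA's own arithmetic, with a hypothetical parameter (my invention, not FDA's) for the fraction of ST-attributable risk actually due to NNN — a number nobody knows:

```python
def projected_rr(baseline_rr=2.16, nnn_reduction=0.65, nnn_fraction=1.0):
    # FDA's formula, generalized: only the NNN-attributable share of the
    # excess risk is reduced. Setting nnn_fraction=1.0 (FDA's implicit
    # assumption) recovers their 1.41.
    excess = baseline_rr - 1
    return 1 + excess * (1 - nnn_reduction * nnn_fraction)

all_nnn = projected_rr(nnn_fraction=1.0)   # about 1.41, FDA's number
half_nnn = projected_rr(nnn_fraction=0.5)  # about 1.78
# If NNN accounts for only half the (supposed) risk, much of the
# claimed benefit of the standard evaporates.
```

The entire projected benefit is hostage to a parameter FDA never even acknowledged was in the calculation.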

Returning to the life on Mars metaphor, and the dialogue motif, the “logic” behind the FDA analysis would map to something like the following:

“From my light-scatter observations, I have concluded that had the water density in the Martian atmosphere been X, instead of the Y I observed, the civilization that built the canals would not have collapsed just after helping humans build the pyramids, but would have thrived for 1,150 more years.”

“Wait, what? There are no canals. There was no civilization. Ancient extraterrestrial visitation stories are just silly claims by people who do not understand science and technology. The rovers and other Mars exploration have already shown that if there is or was anything we might call life, it has had no perceptible impact, let alone built a civilization. There is not enough water to support an ecosystem now, and was not enough 5000 years ago. But even if there had been a civilization, there is obviously no basis for estimating how atmospheric water density affected it, let alone a way to predict its demise to three significant figures based on one observation. As a minor point, I am not sure from what you said whether you meant Mars years or Earth years, but I am guessing you do not even know they are different.”

I am not being hyperbolic when I say FDA’s proposed rule comes across as parody. It reads like someone concocted it in order to ridicule a collection of faulty common practices and reasoning in public health science, creating cartoon versions to highlight problems that are often subtle. Please reassure us, FDA, that this was intentional. Even more so, those of you at the Center for Tobacco Products might want to reassure your colleagues elsewhere in FDA that this is not what their once respectable agency has come to.

Alternatively, perhaps it was really a joke by outgoing officials, hoping for a *popcorn* moment when the new administration tried to defend the rule in court. Or maybe it was just a Dadaesque tribute to the day it was issued. I realize these do not seem like terribly likely explanations, but they are more plausible than believing that anyone with a modicum of scientific expertise thought that this hot mess was legitimate analysis.