by Carl V Phillips
I am a bit late to analyze this proposed FDA rule, which was promulgated on Inauguration Day. But it is still open for comments, and I will be submitting these posts (though for reasons I will get to shortly, these and all other comments are probably moot except as for-the-record background).
Before getting to the substance it is worth noting that this is really the first bit of genuine regulation proposed by the FDA Center for Tobacco Products (CTP) in its eight years. Despite CTP reportedly approaching $4 billion in cumulative expenditures, it has only implemented a few inconsequential rules that were specifically required by the enabling legislation, and has never actually created a standard or specific requirement like a real regulator. Instead, everything it has done has been what I have dubbed weaponized kafkaism. The variation on the word “kafkaesque” refers, of course, to Kafka’s horror stories of bureaucratic (in the pejorative sense) rules that create injustice via impossible procedural burdens. “Weaponized” refers to turning something that is harmful but not malign into a tool for intentionally inflicting harm. CTP has turned filing and paperwork hurdles into a weapon.
It is bad enough when sloppiness, laziness, and incompetence create cost, inefficiency, opaque or even impossible requirements, and uncertainty. But in this case, those results — throwing sand in the gears of the regulated industry, making whatever they and their customers want to do difficult and uncertain — are the goals of the agency. Sloppiness, laziness, and incompetence tend to cause kafkaesque burdens to pile up if no effort is made to push back. They also perfectly camouflage the malevolence of intentionally created burdens.
The march toward a near-ban of e-cigarettes is an example of this. Products will not be banned because they violate some standard or other substantive requirement. CTP is simply taking advantage of the administrative rules that any products that were not on the market in 2007 (i.e., all e-cigarettes) must receive approval as a new product. Requiring new product approvals is not itself particularly unusual or problematic regulation until you observe that CTP has no rules about what makes a new product approvable. It is not even clear what an application should contain. Any application can, and probably will, be arbitrarily disapproved. This is even worse than the oft-noted fact that the new product application process is prohibitively expensive for anything other than a very promising mass-production product, which >99% of e-cigarette products are not, though that also is a kafkaesque burden.
Indeed, as I and many others have documented at length (example), all CTP approval processes are straight outta Kafka. Decisions are completely arbitrary. Up until now, there have been no actual rules. Yet almost everything that CTP was not statutorily prevented from prohibiting has been prohibited.
The newly proposed rule would impose draconian restrictions on the concentration of N-nitrosonornicotine (NNN; one of the tobacco-specific nitrosamines or TSNAs) allowed in smokeless tobacco products (ST). As with e-cigarettes, this would be a stealth ban of almost the entire product category. There are a few, mostly Swedish-style, products on the market that might meet the standard. But American-style ST products do not meet it, and no one seems to think they possibly could meet it and still maintain their character. The tobacco varieties, curing methods, and fermentation that are inherent aspects of making those styles of ST result in NNN levels exceeding the extremely low standard. This is not like a limit on, say, arsenic concentration, which could be met (at some cost) without changing the fundamental character of the products.
It is worth pausing for a moment to note that CTP’s first proposed real regulation is not about cigarettes, but about the lowest-risk product category in their portfolio. Even if we accept FDA’s fallacious claim about the oral cancer risk (which I will analyze in Part 2), eliminating that risk entirely would have less effect than a rule that made cigarettes 0.1% less harmful. Of course this is consistent with their previous anti-ST actions, like the ban on new flavors for what many believed was the lowest-risk of all ST products (and one of the few that meets the proposed NNN standard, incidentally). This was another example of the kafkaism. It is also consistent with their general anti-THR efforts. It also might be motivated by the absurdly innumerate myth among tobacco controllers — who, of course, comprise CTP and also are the political special interest group it is trying to please — that all cigarettes are exactly equally hazardous. To actually regulate cigarette chemistry would be to admit that this is not, and never has been, true. They are too invested in that myth to admit that. (A widely-held, more cynical view is that the political special interests they are trying to please are tobacco controllers and the cigarette industry, which further increases the incentive to avoid regulating cigarettes.)
The proposed rule would put a ceiling on NNN concentration of 1 ppm of dry weight. Most current ST products sold in the USA fall somewhere between a little more than 1 ppm and 10 ppm, with some higher. The vast majority of what is sold is not close enough to 1 ppm that tweaking could get it there.
NNN in ST is widely believed to be a carcinogen. If you have heard the rhetoric, you can be forgiven for not realizing there is actually no evidence of such an effect. This is a crucial point I will come back to in Part 3.
Brad Rodu recently noted that there are at least two major procedural irregularities in the issuing of this rule that each seem sufficient to force it to be withdrawn. I believe there also appears to be a third, and there might be a fourth, though I am not going to go into those because that is not my purpose here. This is why I noted that this scientific analysis is largely moot as a comment on the regulation.
The proposed rule was rushed out on the last day of the Obama administration. The Trump administration might or might not have ordered it withdrawn even if the rule withstood procedural scrutiny: As a free-standing de novo regulation, it is easy to get rid of and score a win for “deregulation” (removal of established regulations that the market has already adjusted to, especially those enmeshed in webs of interacting regulations, is often quite costly, despite the rhetoric to the contrary). It is a major threat to the profits of two major American corporations and their wealthy shareholders, and the administration and congressional leaders do not exactly like policies that hurt the rentier class. At least one Swedish corporation is well positioned to swoop in to replace the banned products, which is anathema to what appears to be the cornerstone of the Trump Doctrine in foreign policy: “attack our allies for selling Americans stuff they want.” Also, given the demographics of ST use, withdrawing the rule would be literally the first thing Trump did that actually benefitted the people who are typically described as his base. But regardless of whether the rule would have been proactively withdrawn, if the procedural problems force the first attempt at this rule to be withdrawn, as they seem to demand, the current administration seems unlikely to fix and reissue it.
In the same post, Brad also reported a little-noticed observation in Altria’s comment on the proposed rule, which itself is sufficient to demand it be reissued. It seems that in evaluating the impact of the rule on the market, FDA made a wee arithmetic error. When assessing which products currently on the market meet the 1 ppm standard, they did not convert correctly from actual product weight to the dry weight the 1 ppm applies to. Most products are about half water, so if you take out the water (either literally or on paper) to get the dry weight concentration you double the concentration of everything else. So snuff with 2 ppm of NNN in the can has 2/(0.5)=4 ppm of NNN dry weight. But FDA multiplied by 0.5 (or divided by 2) when they should have divided by it (or multiplied by 2). As a result, they claimed that 30% of products on the market already meet the standard (because they are <2 ppm wet weight) when actually almost none of the products on the market do so (because they are >0.5 ppm wet weight). Oops.
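To make the arithmetic concrete, here is a minimal sketch (in Python) contrasting the correct wet-to-dry conversion with the inverted one FDA apparently used. The 50% moisture figure is the approximation used above; actual products vary.

```python
# Illustrative sketch of the wet-weight vs. dry-weight NNN conversion.
# Assumes a product that is about half water by weight (moisture_fraction = 0.5).

def dry_weight_ppm(wet_ppm, moisture_fraction=0.5):
    """Correct conversion: removing the water concentrates everything else,
    so divide by the dry fraction of the product."""
    return wet_ppm / (1 - moisture_fraction)

def inverted_ppm(wet_ppm, moisture_fraction=0.5):
    """The error described above: multiplying by the dry fraction
    instead of dividing by it."""
    return wet_ppm * (1 - moisture_fraction)

wet_ppm = 2.0  # snuff with 2 ppm NNN as sold (wet weight)
print(dry_weight_ppm(wet_ppm))  # 4.0 ppm dry weight: fails the 1 ppm standard
print(inverted_ppm(wet_ppm))    # 1.0 ppm: appears, wrongly, to just meet it
```

The same arithmetic shows why the market impact flips: to truly meet a 1 ppm dry-weight ceiling, a half-water product must be at or below 0.5 ppm wet weight, not 2 ppm.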
The upshot is that the stated estimated impact — an inherent part of a rule proposal — is badly wrong. Even if the fact that the rule would immediately ban almost all the current market were not sufficient for bouncing it, the fact that FDA inaccurately claimed this was not the case is sufficient.
This error is truly remarkable. It is not shocking someone made an error in a calculation — we all do that (example of me doing that, one of the few cases of an author owning up to an error in a public health paper you will ever see) — though it would be fair to expect and demand that major rule-making would involve enough eyes that even subtle goofs get caught. What is remarkable is the innumeracy. This is not an arcane error whose impact on final results is hard to triangulate with reality.
It reminds me of a long time ago when I taught grade-school-level math to undergraduates who were, well, let’s say struggling. The departmental policy was to aggressively give partial credit on graded assignments. So if an exam question was “50% of 3.5 is…?” and someone divided 3.5 by 0.5 and got 7, and showed his work, he would get partial credit for knowing 50% is 0.5 and for correctly carrying out the wrong arithmetic. On principle, however, I refused to give such partial credit for absurd answers to “word problems”. E.g., if the question was “Amanda is 3 feet tall. She is 50% of her father’s height. How tall is her father?”, I would refuse — at the cost of some annoying fights with my boss — to give any partial credit for answering that he is 1.5 feet tall. Yes, it is the same arithmetic error. But due to the real-world nature of the question, anyone who was actually thinking would know that answer was not a plausible candidate answer.[*] I felt strongly (even back at age 19 when I was teaching those classes[**]) that gaining some numerical intuition was far more important for these hapless students than being able to do the arithmetic algorithms, and that technical errors might warrant partial credit, but abject failures to understand what numbers mean did not.
[*] For the record, if a student demonstrated a clue about numeracy but was just in that class because his brain was not wired to do arithmetic, and jotted a note that effectively said, “I know this answer is not plausible but I don’t know what I did wrong”, I did give the partial credit. I let them know this in advance (without actually saying that bit about “clue about numeracy….not wired…” of course).
[**] Yes, I know what I just did there. So what? :-p
Anyway, the relevance of this to the present analysis is that anyone at CTP who was actually thinking seriously about what they were doing, about what the regulation really meant, would not make that arithmetic goof and just run with it. This was not some abstract timed exam question. It is not a complicated calculation with no solid touchstone in reality (as with the example of my error). Anyone appropriately assessing this regulation should have spent weeks poring over exactly which products met the standard and which did not, as well as studying what engineering changes were possible for others that could result in them making the cut. Even if they were working from the goofed arithmetic to start with, somewhere during this reading and thinking process, someone involved would have caught the error (e.g., when they read something where someone else did the wet weight to dry weight conversion). Similarly, interacting with stakeholders when writing the regulation, rather than just decreeing it from an ivory tower, would have provided the peer review that would have caught this. It seems safe to assume that FDA officials proposed a massive new regulation on a product consumed by millions of people without even attempting to seriously analyze it. It is not of the same magnitude as the failure to do that for the healthcare financing bills, of course, but similar criticisms of the process apply.
This failure does not end with arithmetic. The same problem appears in the science, which, I notice, I have finished this first post in the series without getting to. Tobacco controllers do not relate to numbers and research results like scientists or generally numerate people do. They relate to them like struggling undergraduates, treating them as magic incantations that just have to be done according to some recipe (which they often do not get right). Also they treat them like the special interest political activists they are, as mere weapons whose goodness or badness is determined not by their accuracy, but by their usefulness for the cause.
With that, I will get to the specific evidence in the next post.
[Acknowledgment: My analysis in this series benefited greatly from discussions with and input from Brad Rodu.]