
Glantz settles academic fraud and sexual harassment lawsuit

by Carl V Phillips

As my regular readers — or anyone who appreciates some really awesome schadenfreude — will know, two lawsuits were filed by female students against UCSF anti-tobacco nutcase and faux-scientist Stanton Glantz (technically they were postdoctoral fellows, but there is no relevant difference here between a graduate student and a postdoc). They were billed as sexual harassment suits, though the real payloads were about other abuses by Glantz. I covered it at The Daily Vaper, here, here, and here, and did a video interview about what I wrote here.

Mine account for approximately half the newspaper articles written about the story. That pattern does not quite constitute a coverup, but it is pretty close. Compare the number of stories about any similar accusations against a professor who is not a pet of the US government and a favorite of institutions with political influence over the mainstream press. (Sexual harassment suits are a topic that, for obvious reasons, you cannot count on the right-wing press to pick up, even if it suits them politically.)

As I noted in my previous coverage, the most striking aspect of the story is that not a single person in tobacco control expressed so much as a lame “troubling if true” comment. At the same time, not one expressed doubt about the allegations and tried to defend Glantz. Presumably they did not actually doubt that the claims were true, but just did not care.

Also as I noted in my previous coverage, the salacious #MeToo allegations in the complaints are not really the biggest allegations. The sexual (and racial) harassment allegations in the lawsuits were pretty weak. They consisted of rude and boorish behavior by Glantz, and do not paint a picture of a sustained pattern of such behavior. Even that much is grossly inappropriate for any employer, let alone an academic mentor, of course. But we are not talking “high-ranking Republican official”-level behavior; more like Al Franken behavior. (In keeping with my running observations about the parallels between the GOP/Trump and tobacco control, it is worth noting that Franken’s party forced him to resign over one boorish incident from his past, while tobacco controllers’ ignoring of the Glantz allegations parallels the GOP practice of ignoring credible criminal-level allegations against Trump and other officials.) Of course, it is possible that the lawsuit filings do not fully capture just how pervasive and oppressive the behavior was, though it seems extremely unlikely there were worse specific acts that were not mentioned.

If the plaintiffs could not produce much more about that than the filings included, it seemed really unlikely they could get it to stick. But this is not true for the less salacious allegations that most observers overlook. The first lawsuit, by Eunice Neeley, said that Glantz stole credit for a paper she wrote and even submitted it to a journal under his name. This is extremely serious bright-line academic fraud, and very easy to prove. The second lawsuit included a claim about Glantz defrauding the U.S. government in order to get funding. This was presented less clearly, but seems like it is probably also bright-line and easy to prove. Of course, if the U.S. government cared about Glantz committing fraud, they would not be happy about his “research” in the first place. (Narrator: “They are actually quite delighted with his fraudulent research.”)

It was recently noticed (not actually announced) that UCSF and Glantz settled the Neeley lawsuit a few weeks ago. UCSF has posted the settlement agreement (h/t @jkelovuori). It is not clear whether they posted it because of transparency requirements or if they agreed to do it as part of the settlement. It could be they think it makes them look a lot better than a pending lawsuit, which is arguably true.

The settlement included UCSF paying Neeley (which really means mostly or entirely her attorney) $150,000. This is about what it would have cost the defendants to go to court and win, let alone risk losing. As is usually the case, the settlement document includes the defendants’ denials of all the allegations. The pro forma denial and the limited dollars do mean that the settlement document makes UCSF look pretty good. Neeley’s lawyer may have decided that the case was unwinnable. The part that was bright-line misconduct is something (real) scholars would care deeply about, but a court probably would not. Or perhaps the lawyer was just looking to collect her pay and move on, and so convinced Neeley that it was unwinnable. Neeley seems pretty adamant in her feelings (read on), so it does not seem like she lost interest in pursuing it.

The only thing she got from the settlement, other than whatever loose change her lawyer did not keep, was the right to the one paper Glantz stole from her and an agreement that Glantz would not touch it again. There is little doubt she could have gotten that without a lawsuit, given how obvious Glantz’s academic fraud was. Indeed, even that concession came with enough administrative strings attached that she probably could have done better just by relying on the court of academic public opinion. (Interestingly missing is mention of any other work by Neeley, even though her complaint claimed that this was not the only work of hers that Glantz was trying to steal.) One might conclude this was the one thing Neeley wanted, a matter of credit and honor. Some of her past statements support that honorable interpretation. But she has made clear that this is not the case (read on).

It is impossible to feel sorry for Neeley. She is a tobacco controller of the worst sort (read on). She went to work for a known sociopathic fraudster who has no respect for science and scholarship. Even if he has never committed any actionable sexual abuse, it is well-established that he is an abuser. We might be able to say “just a kid, could not recognize that” about Neeley regarding the latter. But she took the job after getting a doctorate (*smirk/snort*) in public health, and so has no excuse for not understanding the former.

Absent from the settlement was a gag clause (you have to spend a lot more than 150K to buy a woman’s silence — just ask any number of right-wing “family values” types). So a few weeks ago, Neeley joined Twitter to post her allegations about Glantz in random responses to tweets by his funders and others. No link, unfortunately, because literally minutes before I was about to publish this she deleted her account. So now I am even more annoyed at her, for making me re-edit.

Her tweets made it clear she was adamantly committed to the belief that Glantz is an ongoing threat to young women, and that she wants to warn and inform about that. If Glantz and UCSF really believed their denials, they would already be threatening to sue her for libel (and, I suppose, that may have been what just happened).

I am not sure whether Neeley uses other platforms. Her tweets never linked to any essay-length statement, which you would think she would have written. Her tweets did not link to any other social media. Perhaps it exists — she was amazingly bad at Twitter for a millennial, so maybe it did not occur to her to mention it. I offered her some advice about some easy ways to do better in her mission, but she seems to have ignored that.

I did not mention in that advice that she came across as a raving loon in most of her tweet replies. If that represents her demeanor and level of focus in person (I have no idea whether it does), her attorney definitely would not have wanted to put her on the stand. I am obviously not saying that presenting that way invalidates a #MeToo complaint (as a certain disgusting ilk do), let alone that the mere act of accusing a powerful man of sexual harassment makes someone sound like she is raving (again, as that ilk do). I am specifically saying that most of her tweets were not what anyone would take seriously. They make her allegations seem less plausible. Thus my advice to her.

But it gets worse (or better, depending on how you choose to relate to this story). Greg Conley noticed that one of Neeley’s tweets claimed that a UCSF investigation confirmed her allegations. (That thread is now missing her tweets, of course, but he was responding to one that said what I just paraphrased.) Conley naturally asked if there was a copy of that report available. You might think that someone who was intent on making her case, and interested in being at least a little bit credible, would have an easy answer to that. Presumably it would be “Unfortunately I don’t have a written report and none is public. I am aware of their findings because….” (you have to assume, as bad as she is at this, she would have linked to it if she could have). But no. Here is her (now deleted) reply:

I do not support vaping, Mr. Conley I know who you are. But, I brought this up because I want to protect individuals working with Stan.

I laugh every time I read that (and not about the typo since I average about .5 typos per tweet, though that did bring to mind HRH Conley the First writing “Counterblaste to Tobacco Control”). “I know who you are” is presented as if it is some kind of accusation. Um, someone working in that area better know who he is. She better know who I am too, though I did not get the courtesy of an “I know who you are.”

(*author pauses to laugh some more, and ponders himself writing Counterblaste to Tobacco Control*)

Anyway, Conley responded sensibly and politely that differences about other politics should not interfere with the shared concern about Glantz hurting his students. In another thread, he offered some other advice, pointed out to her that I was the only reporter who had followed the story, and noted that zero tobacco controllers follow her on Twitter because they want to suppress this. As of today, she had 17 followers; 14 were anti-TC people of various stripes and the other 3 were bots. She did not respond to him.

I chimed in with this observation:

“I am terribly worried about other women, but not enough that I will talk to someone who supports harm reduction” is quite the remarkable position. Says a lot about the brainwashing that led to the omertà silence.

It is truly remarkable. Neither of us explicitly pointed out to her the common practice of narrow alliances, or merely taking advantage of resources when you can. The government that paid for her education is responsible for killing a million people in a war of aggression recent enough for her to remember it. Few of us refuse government services as a result. However much someone might quite reasonably despise the cops, they tend to call on them to perform the legitimate part of their job when needed. I heartily welcome Neeley’s exposure of one tiny corner of one individual’s deplorable behavior, even though I think her political views are also deplorable. Someone should probably point this out to her rather than thinking it is so obvious it goes without saying. Tobacco control lives so far up their own asses that they may be the only people who do not get this.

She is obviously adamant and genuine about her concern, but is such a tobacco controller at heart that she will not actually do anything useful. She is effectively silencing herself (and not just by deleting her useless tweets). She plays at getting the word out, but does not actually want to get the word out because it might interfere with tobacco control’s efforts to ruin people’s lives. She somehow thinks that harm reduction supporters getting the word out will benefit harm reduction, but that her getting the same word out (which is really beyond her ability) would not. But how does that even work? (A doctorate in public health does not exactly teach scientific or logical thinking.)

So she attacks the only useful potential allies she has in the world, now that her lawyer took the money and skated.

Oh, and a few last bits of bathos. Her response to me:

I am for harm reduction because I do not want anyone to use any nicotine-containing product. I am well aware of the harms of nicotine and that is why I have advised the FDA to lower the nicotine in all nicotine containing products.

I am pretty sure that, despite being a supposed scholar and researcher in the field, she genuinely does not understand that her described position is roughly the diametric opposite of harm reduction. This is what I meant by “a tobacco controller of the worst sort”.

And another random other tweet by her:

For all you vapers and tobacco companies, I am strongly anti-tobacco and anti-vaping. I am only complaining about @ProfGlantz because of his decades of sexual harassment. So if you are pro-nicotine, I would not follow me because I advise the FDA to protect public health.

I guess she only liked her bot followers.

Since her deleting her account ruined the ending I had written, I will instead go all-in with the bathos and treat you to a couple of pictures Greg Conley sent me while preparing this. Trigger warning: Once you see these, you cannot unsee them. (Yeah, I know, you saw the pictures before you read this paragraph. I did not mean it.)

Wait, where is his right hand?

Yes, that really is a screenshot of his phone wallpaper.

 

 

 

 


Public health publishing is fundamentally unserious: evidence from a single measure of area

by Carl V Phillips

Sometimes an error matters because of its effects. Sometimes it matters because of what it says about its causes.

I was late to this nice piece by Roberto Sussman (a guest post at Brad Rodu’s blog) that takes down a recent silly paper out of University of California about environmental deposition on surfaces resulting from vaping exhalate. They do not actually call it “third-hand vapor”, though they all but do so, explicitly likening it to the myths (which they endorse, of course) about “third-hand smoke”. For the analysis of the science, please read Roberto’s piece, because here I am just focusing on a single gaffe and its implications.

As background, note that this came from the supposedly respectable tobacco controllers at UC, including Benowitz and Talbot, not the utter loons in Glantz’s shop. It was published not in some random online journal, but in the supposedly respectable flagship journal of the tobacco control movement, BMJ’s Tobacco Control.

Reading Sussman’s piece, I came across this, which he quoted from the original paper:

After 35 days in the field site, a cotton towel collected 4.571 micrograms of nicotine. If a toddler mouthed on 0.3 m² [square meters] or about 1 squared feet of cotton fabric from suite #1, they [sic] would be exposed to 81.26 µg [micrograms] of nicotine.

Sussman’s post is analytic, but it was written as an essay and so I was reading it fairly casually. That is, I was not trying to actively check each bit of the math as I read it, as I would when reading a research report. But even a quick glance across that passage was enough for me to trip up and notice the error. A square meter is about ten square feet, and thus 0.3 m^2  is about 3 square feet. Sussman, who was reading the original paper carefully for purposes of criticizing it, of course also caught this error and noted it in his next paragraph.
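To make the arithmetic concrete, here is a minimal sketch of the conversion (the conversion factor is the standard one, not a number from the paper):

```python
# Sanity check on the paper's claim that 0.3 m^2 is "about 1 square foot".
SQFT_PER_SQM = 10.7639  # 1 square meter = 10.7639 square feet

area_m2 = 0.3
area_ft2 = area_m2 * SQFT_PER_SQM
print(round(area_ft2, 2))  # about 3.23 square feet, not 1
```

The likely source of the goof, as discussed below, is treating the linear conversion (1 m is about 3.3 ft) as if it applied to areas, when areas scale with the square of the linear factor.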

In theory this affects the thesis of the paper, which is based on the premise of a toddler sucking out all the nicotine that has accumulated in a towel that has sat untouched in a vape shop for a month. (Yes, believe it or not, that is really the premise of the analysis.) So the error means that the magical vacuuming toddler is given credit for extracting 3 ft^2 worth of accumulation by sucking the heck out of a mere 1 ft^2 of fabric.

However, this is not one of those convenient errors that creates artifactual results that matter. First, every bit of this scenario is obvious nonsense, as Sussman explains, and every step grossly exaggerates the real-world exposures. And, second, even with all that, the tripled quantity is still trivial. So it is not like the common type of “error” from tobacco control research, one done intentionally to get the result the authors want. It merely changes the result from “a silly premise that despite its huge overstatement still only yields a trivial exposure” to “a silly premise that yields an exposure that is three times as high, but is still trivial.” It is obvious that the conclusions of the paper (“environmental hazard” — i.e., landlords should be pressured to not host vape shops) were in no way influenced by the results.

In addition, it is a pretty stupid intentional “error” to make. It is a bright-line error, which appears right there in the text, as if someone had written 2+2=5. The typical tobacco controller “errors” consist of such tricks as conveniently not mentioning that a crucial variable makes the entire result go away (which only very careful readers catch), or fishing for a model that produces the most politically favorable result and pretending it was the only version of the model ever run (which is easy to detect, but impossible to prove).

No, it is clear that this was a mere goof. Someone who is not so good with numbers was thinking “a meter is about three feet, so it must be that a m^2 is about three ft^2”. Oops.

But here’s the thing: Whoever was doing the calculations for the paper made that goof, but more significantly did not catch it on further passes through the material. In other words, no one ever thought carefully about the calculations. Then someone transferred the calculation notes into the text of the paper without noticing the error at that point. The other authors of the paper (there were four total) reviewed the calculation and the paper without ever engaging their brains enough to notice the error, and let it go out the door. Or perhaps they never even reviewed the calculations they were signing off on, and perhaps not even the paper.

Keep in mind that perhaps you, dear reader, might not notice this error on a quick read. Perhaps you did not even know that a m^2 is about 10 ft^2. But anyone who does science, and is burdened with the hassle of dealing with stupid non-SI American units of measure, knows stuff like this intuitively. As I said, I noticed it without even thinking about it, just like you would notice a misspelling even though you are not actively looking for mispelings as you read. Sussman noticed it, and he is a scientist who probably never sees mention of non-SI units in his work, and who lives in a normal country that uses SI units (i.e., “the metric system”) in everyday communication. It is apparent that none of the authors of the paper ever read it as carefully as he did.

The American authors, who need to be literate in translating from American units to scientific units, should have noticed it. It is a safe bet that if prompted, “there is an error in that sentence,” they would figure it out in a few seconds. So the point here is not that they do not know the units or how to do arithmetic, but that they did not pay enough attention to their own calculations to notice the simple error. They never really cared about the calculations, as evidenced by the conclusions that are not actually supported by the results.

They were not the only ones. The reviewers and editor(s) at BMJ Tobacco Control also did not read the paper carefully enough to catch the error. As I have noted at length on this page, journal peer-review in public health is approximately useless. A generalist copy editor would probably have caught it, but presumably BMJ TC does not employ one despite being hugely profitable.

This also means that no one other than the aforementioned seven or eight individuals read the paper carefully. Indeed, it is quite possible that no one else read the paper at all before it appeared in the journal. From the perspective of serious science, this is actually the biggest problem in public health research evident here: not circulating a paper for comments before etching it in stone, but rather creating a “peer-reviewed journal article” out of what is effectively a superficially polished first draft of a scientific analysis. Anyone who actually wants to get something right makes sure a lot of people read it critically before they commit to it.

Many errors in public health articles are a bit complicated, and pretty clearly happen because the authors and reviewers do not know enough science to know they were errors. Many others are pretty clearly intentional on the part of the authors, and signed off on by reviewers because they are incompetent, inattentive, and/or complicit in wanting to disseminate the disinformation. But a stupid error like this illustrates something different: Public health authors and journals are simply not even trying to do legitimate analysis.

All people like better products. Teenagers are people. Therefore….

by Carl V Phillips

So today FDA Commissioner Gottlieb is pumping cigarette company stock prices by threatening to ban flavors in vapor products (or something — not entirely clear), unless the manufacturers magically get teenagers to switch back to smoking instead (or something — not entirely clear). I wanted to address one aspect of this rhetorical game that does not get talked about enough. I doubt there is any serious observer of this space who does not get this, but much of what is said seems to overlook it rather than drilling down to it as it should.

The prohibitionist’s simplest rhetorical game here is to confuse “this product feature is appealing to teenagers” with “this product feature is particularly or uniquely appealing to teenagers.” But there is a deeper game, trying to cement the premise that intentionally lowering product quality is a good thing. This applies not just to interesting flavors of e-liquid, but also everything from attractive packaging to convenient unit quantities. The standard response to the “teenagers like flavors” rhetoric is to counter that adults like them too, and thus they seem to be critical for smoking cessation. Both systematic data and a deluge of testimonials make this point. It is a great point, and those making it are doing a great job.

However, the prohibitionists at FDA and elsewhere are obviously not unaware that adults also like and buy interesting flavors. Similarly, adults and teenagers both like it that e-cigarettes are less than five kilograms and come in colors other than day-glo orange. They like it that they are affordable, that cartridges last for a while, and that the devices do not burn your lips. They like it that there is no regulation that says tobacco products must be smeared with feces before they are packaged. All of these are aspects of product quality. The same features that make a product appealing to people (and thus, the banning of which would make them less appealing to people) make it appealing to teenagers. It turns out that teenagers are very similar to people, and many would argue that they are people. Lower the quality of the product, and fewer teenagers will choose to consume it. Fewer adults too. This works for food, movies, and pens also. There is no magic here.

The magic exists entirely in the rhetoric, in which the prohibitionists trick people into endorsing (or at least not actively pushing back against) their underlying premise: Intentionally lowering product quality is a good thing because it discourages teenage use. Never mind that intentionally lowering people’s welfare is a phenomenally radical action for a government to take, one that ought to be based on a lot of open and honest analysis, not sneaky rhetoric. I find it is a useful clarifying thought to replace whatever quality-lowering regulation is being debated with “mandatory smearing with feces” (assume the feces are sterilized so they are not a health hazard): If it is okay to intentionally lower the product quality by doing X (flavor bans, “plain packs”, punitive taxes, etc.), then it must be okay to mandate feces smears.

Consider the usual scientific response to flavor ban proposals, that there is no evidence that particular flavors or categories are particularly appealing to teenagers. This is accurate; there is no such evidence and no reason to believe it is true. If someone wanted to lower vapor product quality in a way that particularly affected teenagers, perhaps the orange coloration or increased mass options would be the better bet. After all, isn’t the usual claim that teenagers are taking advantage of the products being so subtle that they can hide them from parents and teachers? Adults would not like ugly heavy products, but they could deal with them.

The thing is that FDA et al. are not actually claiming that the flavors are particularly appealing to teenagers, just that they are appealing. This is obviously true (see above observation that teenagers are very much like people). A casual reader might conclude they are claiming that this is a targeted lowering of quality that affects teenagers but not adults. In fact, the serious actors in the space seldom actually claim that, and when they do it seems usually to be a matter of sloppy word choice. They do not actually consider it a problem that a regulation lowers the appeal of a product for everyone (and thus hurts all consumers). To them, this is a feature, not a bug. They want to ruin the products for everyone.

In getting opponents to go along with their fiction that this is not their motive, they win their greatest victory. One of the important skills of a conman like Scott Gottlieb is to get people to adopt his hidden premises without him ever stating them, let alone defending them. When the arguments hinge on “but adults like flavors just as much as teenagers do”, they effectively concede a key prohibitionist premise: If there were a way to intentionally lower product quality, such that it hurt teenage consumers more than adult consumers, then doing it would be fine. Not just fine, but good or even clearly the right thing to do. No doubt there are some vape advocates who accept that, but presumably most are not ready to agree that their e-cigarettes should have to look like traffic cones. But by just fighting the empirical claim (which is not actually even being claimed), they are often implicitly endorsing the normative premise.

Some advocates lead with the message that there are already laws about teenage access and these just need to be enforced. This is good in that it does not endorse the premise that it goes without saying that harming adults for the good of the chiiiildren is justified (though usually this is not explicitly stated). The problem is that Gottlieb has cleverly turned this on its head, and threatens to hurt adults if they do not somehow better enforce the government’s laws, magically figuring out how to do what the government has never been able to do with cigarettes. Today’s rhetoric was mostly threatening the industry (though it is consumers who would suffer, of course), but he has directed that same demand at vapers themselves. Those who have been tricked into endorsing the underlying premises are cornered by this. They have effectively already conceded that destroying product quality is acceptable if bans on sales to minors cannot be enforced.

Advocates need to do a better job of backing a few steps up the prohibitionists’ chain of reasoning, rather than being tricked into conceding so much ground. Every argument should begin with the observation, “this policy is about intentionally harming people (vapers, smokers, other product users).” This should always be pointed out, because in itself that is a radical use of government power that should not pass without comment. It should be followed with a demand for an answer to, “by what right do you harm me/adult consumers/your citizens, even if it is true that this harms others more and harming them is a good thing because it changes their behavior?” Only after making those observations, and trying to never let the audience forget them, is it time to add “discouraging teenage vaping probably encourages teenage smoking”, “the evidence does not support your implicit claim that teenagers like flavors better than adults do”, and other arguments about the scientific facts.

Let’s try to get our criticisms right, shall we? (More on the recent “vaping causes heart attack” study)

by Carl V Phillips

Sigh. We are supposed to be the honest and scientific ones in the tobacco wars. But we won’t be if we are not, well, scientific. A case in point: the criticisms of the recent paper with Glantz’s name on it that has been erroneously said to suggest that vaping doubles the risk of heart attack.

Incidentally, the meaningless statistic in the paper is a RR of 1.8, which is not double. Also, when the paper was originally written as a student class project (not by science students, mind you, but by medical students), that statistic was 1.4. That was when Glantz heard about it, managed to get the kids to put his name on the paper, and taught them how to better cook their numbers. That “contribution” has him being called the lead author.

The paper is junk science. So are most of the criticisms of it. If only someone with expertise in these methods had written a critique of it that people could look to. Oh, wait, here’s one in The Daily Vaper from February. That was based on a poster version of the paper, but as I noted in the article, “It has not yet appeared in a peer-reviewed journal, but it will, and the peer-review process will do nothing to correct the errors noted here.” I wish I could claim this was an impressive prediction, but it is about the same as predicting in February that the sun will rise in August.

You can go read that if you just want a quick criticism of the paper, and also look at the criticism on this page of some hilarious innumeracy Glantz piled on top of it. In the present post I am mostly criticizing the bad criticisms, though at the end I go into more depth about the flaws in the paper.

About half the critiques I have seen say something along the lines of “it was a cross-sectional study, and therefore it is impossible to know whether the heart attacks occurred before or after someone started vaping.” No. No no no no no. This is ludicrous.

Yes, the data was from a cross-sectional survey (the 2014 and 2016 waves of NHIS, mysteriously skipping 2015). And, yes, we do not know the relative timing (as discussed below). But “therefore it is impossible to know” (or other words along those lines)? Come on. A cross-sectional survey is perfectly capable of measuring the order of past events. Almost every single cross-sectional survey gives us a pretty good measure of, for example, whether someone’s political views were formed before or after the end of the Cold War. Wait! What kind of wizardry is this? How can such a thing be known if we do not have a cohort to follow? Oh, yeah, we ask them their age or what year they were born. Easy peasy.

Almost every statistic you see about average age of first doing something — a measure of the order in which events occurred (e.g., that currently more Americans become smokers after turning 18 than before, but most extant smokers started before they were 18) — is based on cross-sectional surveys that ask retrospective questions. It is perfectly easy to do a survey that asks heart attack victims the order in which events occurred. Indeed, any competent survey designed to investigate the relationship in question would ask current age, age of smoking initiation and quitting, age of vaping initiation and quitting, and age at the time of heart attack(s), ideally drilling down to whether smoking cessation was just before or just after the heart attack if they occurred the same year. We would then know a lot more than the mere order. But NHIS does not do that because, as I noted in the DV article, it is a mile wide and an inch deep. It is good for a lot of things, but useless for investigating this question. It can be used, as it was here, for a cute classroom exercise to show you learned how to run (not understand, but run) the statistical software from class. But only an idiot would think this paltry data was useful for estimating the effect.
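The point about retrospective ordering can be made concrete with a toy example. The record below is hypothetical and the field names are illustrative (they are not NHIS variables); it just shows that reported ages from a single interview are enough to order past events:

```python
# Hypothetical single-interview (cross-sectional) survey record.
# Field names are illustrative only, not actual NHIS variables.
respondent = {
    "current_age": 58,
    "age_started_smoking": 16,
    "age_quit_smoking": 49,
    "age_started_vaping": 50,
    "age_of_heart_attack": 47,
}

# One interview, yet the order of past events is recoverable:
# simply compare the reported ages at which each event occurred.
heart_attack_before_vaping = (
    respondent["age_of_heart_attack"] < respondent["age_started_vaping"]
)
print(heart_attack_before_vaping)  # True: the heart attack predates vaping here
```

A survey designed to investigate the question would ask exactly this kind of retrospective age battery; the problem with NHIS is not that it is cross-sectional, but that it does not ask these questions.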

(A variation on these “therefore it is impossible” claims is the assertion that because it is a cross-sectional study, it can only show correlation and not causation. I am so sick of debunking that particular bit of epistemic nonsense that I am not even going to bother with it here.)

So, we do not know the order of events. We can be confident that almost all the smokers or former smokers who had heart attacks smoked before that event. We do not know whether subjects quit smoking and/or started vaping before their heart attacks. Given that vaping was a relatively new thing at the time of the surveys, whereas heart attacks were not, it seems likely that most of the heart attacks among vapers occurred before they started vaping. This creates a lot of noise in the data.

A second, and seemingly more common, erroneous criticism of the analysis is that this noise has a predictable direction: “Smokers had heart attacks and then, desperate to quit smoking following that event, switched to vaping, thereby creating the association.” Again, no no no. Heart attacks do cause some smokers to become former smokers, but there is little reason to believe they are much more likely than other former smokers to have switched to vaping. Some people will have heart attacks and quit smoking unaided or using some other method. Indeed, I am pretty sure (not going to look it up, though, because it is not crucial) that most living Americans who have ever had a heart attack experienced that event before vaping became a thing. So if they quit smoking as a result of the event, they did not switch to vaping. Also, it seems plausible that the focusing event of a heart attack makes unaided quitting more likely than average, as well as making “getting completely clean” more appealing.

Of course, an analysis of whether behavior X causes event Y should not be based on data that includes many Y that occurred before X started. That much is obviously true. NHIS data is not even a little bit useful here, which is the major problem. There is so much noise from the heart attacks that happened before vaping that the association in the data is utterly meaningless for assessing causation.

But there is no good reason to assume that this noise biases the result in a particular direction. If asked to guess the direction of the bias it creates, a priori, I probably would go in the other direction (less vaping among those who had heart attacks compared to other former smokers). The main reason we have to believe that the overall bias went in a particular direction is that the result shows an association that is not plausibly causal. We know the direction of the net bias. But this is not the same as saying we had an a priori reason to believe this particular bit of noise would create bias in a particular direction. When we see a tracking poll with results that are substantially out of line with previous results, it is reasonable to guess that random sampling error pushed the result in a particular direction. But we only conclude that based on the result; there was not an a priori reason to predict random sampling error would go in a particular direction.

Moreover, we do not have any reason to believe that the net bias was caused by this particular error, because it has a rather more obvious source (see below).

Sometimes we do have an a priori reason to predict the direction of bias caused by similar flaws in the data, as with the previous Glantz paper with an immortal person-time error (explained here, with a link back to my critique of the paper). If the medical students had engaged in a similar abuse of NHIS data to compare the risks of heart attack for current versus former smoking, then the direction of bias would be obvious: Heart attacks cause people to become former smokers, which would make former smoking look worse than it is compared to current smoking. I suspect that people who are making the error of assuming the direction of bias from the “Y before X” noise are invoking some vague intuition of this observation. They then mistranslate it into thinking that former smokers who had a heart attack are more likely to be vapers than other former smokers.

This brings up a serious flaw in the analysis that I did not have space to go into in my DV article: The analysis is not just of former smokers who vape, but includes people who both smoke and vape, as well as the small (though surprisingly large) number of never-smokers who vape. If vaping does cause heart attacks, it would almost certainly do so to a different degree in each of these three groups. For reasons I explored in the previous post, different combinations of behaviors have different effects on the risk of an outcome. Vaping probably is protective against heart attack in current smokers because they smoke less than they would on average. If a smoker vapes in addition to how much she would have smoked anyway, the increased risk from adding vaping to the smoking is almost certainly less than the (hypothesized) increased risk from vaping alone. Whatever it is about vaping that increases the risk (again, hypothetically), the smoking is already doing that. Thus any effect from adding vaping to smoking would be small compared to the effect of vaping versus not using either product. Most likely the effect on current smokers would be nonexistent or even protective.

Indeed, this is so predictable that if you did a proper study of this topic (using data about heart attacks among vapers, rather than vaping among people who sometime in the past had a heart attack; also with a decent measure of smoking intensity — see below), and your results showed a substantial risk increase from vaping among current smokers, it would be a reason to dismiss whatever result appeared for former smokers. This is especially true if the estimated effect was substantial in comparison to the estimate for former- or never-smokers. If you stopped to think, you would realize that your instrument produced an implausible result, and thus it would be fairly stupid to believe it got everything else right. This is a key part of scientific hypothesis testing. Of course, such real science is not part of the public health research methodology. Nor is stopping to think.

It is a safe bet that the students who did this analysis understand none of that, having never studied how to do science and lacking subject-matter expertise. Glantz and the reviewers and editors of American Journal of Preventive Medicine neither understand nor care about using fatally flawed methods. So the analysis just “controls for” current and former smoking status as a covariate rather than separating out the different smoking groups as it clearly should. This embeds the unstated — and obviously false — assumption that the effect of vaping is the same for current, former, and never smokers. Indeed, because “the same” in this case means the same multiplicative effect, it actually assumes that the effect for current smokers is higher than that for former smokers (because their baseline risk is higher and this larger risk is being multiplied by the same factor).
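To see what that single-coefficient assumption implies, here is a sketch with invented numbers (none of these figures come from the paper; the point is the structure of the model, not the values):

```python
# Invented numbers for illustration: a model that merely "controls for"
# smoking status with one vaping coefficient assumes the same
# multiplicative RR for vaping in every stratum.
rr_vaping = 2.0  # the model's single vaping RR (hypothetical)
baseline = {"never": 0.01, "former": 0.03, "current": 0.06}  # made-up baseline risks

for group, p in baseline.items():
    # absolute excess risk implied by applying the same multiplier to each baseline
    print(f"{group:7s} baseline={p:.2f}  implied excess risk={p * (rr_vaping - 1):.3f}")
```

The same multiplier implies the largest absolute harm for current smokers, the group where an added effect of vaping is least plausible.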

Though they did not stratify the analysis properly, it is fairly apparent their results fail the hypothesis test. The estimate is driven by the majority of vapers in the sample who are current smokers, so they must have had a substantially greater history of heart attacks.

There is a good a priori reason to expect this upward bias, as I noted in the DV article, but it is not the reason voiced in most of the critiques. It is because historically vapers had smoked longer and more than the average ever-smoker. This is changing as vaping becomes a typical method for quitting smoking, or a normal way to cut down to having just a couple of real cigarettes per day as a treat, rather than a weird desperate attempt to quit smoking after every other method has failed. Eventually the former-smoking vaper population might look just like the average former-smoker population, with lots of people who smoked lightly for a few years and quit at age 25, and so on. But in the data that was used, the vapers undoubtedly smoked more than average and so were more likely to have a heart attack (before or after they started vaping).

Controlling for smoking using only “current, former, never” is never adequate if the exposure of interest is associated with smoking history and smoking causes the outcome, both of which are obviously true here. If there are no such associations then there is no reason to control for smoking, of course. Thus basically any time you see those variables in a model, you can be pretty sure there is some uncontrolled confounding due to unmeasured smoking intensity. In this case, you can be pretty sure that its effect is large and it biases the association upward.
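A toy simulation (every number here is invented) illustrates that mechanism: give vaping zero true effect, let unmeasured smoking intensity raise both the probability of vaping and the probability of a heart attack, and a spurious positive association appears even though smoking status is held constant:

```python
import random

# Toy confounding simulation, all parameters invented. Vaping has NO effect
# on heart attack (MI) risk, but heavier smoking history raises both the
# chance of vaping and the chance of an MI. Controlling only for smoking
# status (everyone here is in one stratum) leaves the bias in place.
random.seed(0)

def simulate(n=200_000):
    tally = {True: [0, 0], False: [0, 0]}  # vaper -> [MIs, count]
    for _ in range(n):
        heavy = random.random() < 0.5                      # unmeasured intensity
        vaper = random.random() < (0.30 if heavy else 0.10)  # heavy smokers vape more
        mi = random.random() < (0.10 if heavy else 0.02)     # vaping itself does nothing
        tally[vaper][0] += mi
        tally[vaper][1] += 1
    risk = {v: hits / count for v, (hits, count) in tally.items()}
    return risk[True] / risk[False]

rr = simulate()
print(f"Apparent RR for vaping (true effect = 1.0): {rr:.2f}")  # well above 1
```

The spurious RR falls out of nothing but the association between vaping and unmeasured smoking intensity.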

In short, the results are clearly invalid. There are slam-dunk criticisms that make this clear. So let’s try to stick to those rather than offering criticisms that are as bad as the analysis itself. Ok?

Dual use and the arithmetic of combining relative risks

by Carl V Phillips

It was called to my attention that UCSF anti-scientist, Stanton Glantz, recently misinterpreted the implications of one of his junk science conclusions. Just running with the result from the original junk science (which I already debunked) for purposes of this post, Glantz made the amusing claim that because vaping increases heart attack risk by a RR=2 and smoking by a RR=3 (set aside that both these numbers are bullshit) then dual use must have a RR=5. WTAF?

First off, there is no apparent way to get to 5 except by pulling it out of the air. It is apparent that Glantz thinks he was adding the risks: 2+3=5. Except you cannot add risks that way. Every first-semester student knows the formula for adding risks, which is based on the excess risk. Personally I have always thought that having students memorize that as a formula, rather than making sure they intuit it, is a major pedagogic failure. But that aside, they do memorize the formula, which subtracts out the baseline portion of the RR then adds it back, as should be obvious: (RR1 – 1) + (RR2 – 1) + 1. So, the additive RR = (2 – 1) + (3 – 1) + 1 = 4. Think about it: If you “added” Glantz’s way then two risks that had RR=1.01 (a 1% increase in risk) would add to 2.02 (more than double). Or two exposures that reduced the risk by 10% (RR=0.9) would add to an increased risk, RR=1.8. Not exactly difficult to understand why this is wrong.
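The excess-risk arithmetic above is trivial to put in code, which also makes the pathologies of naive addition obvious:

```python
def additive_rr(*rrs):
    """Combine relative risks under additivity of excess risk:
    (RR1 - 1) + (RR2 - 1) + ... + 1."""
    return sum(rr - 1 for rr in rrs) + 1

print(additive_rr(2, 3))        # 4, not Glantz's 5
print(additive_rr(1.01, 1.01))  # about 1.02; naive "addition" would claim 2.02
print(additive_rr(0.9, 0.9))    # about 0.8; naive addition would claim 1.8
```

Two tiny risks stay tiny, and two protective exposures stay protective, exactly as intuition demands.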

Additivity of risks is a reasonable assumption if the risk pathways from the exposures are very independent. The excess risk of death caused by both doing BASE jumping and smoking is basically just the excess risk of each added together. (A bit less because if one kills you, you are then not at risk of being killed by the other.) If the risks from the two exposures travel down the same causal pathways (or interact in various other ways), however, adding is clearly wrong. If vaping causes a risk (for heart attack in this example, though that does not matter), then smoking almost certainly causes the same risk via the same pathway. There is basically no aspect of the vaping exposure that is not also present with smoking (usually more so, of course). When this is the case, there are various possible interaction effects. One thing that is clear, however, is that simply adding the risks as if they did not interact is wrong.

The typical assumption built into epidemiology statistical models is that the risks multiply. This is not based on evidence that it is true, but merely on the fact that it makes the math easier. The default models that most researchers tell their software to run, having little or no idea what is actually happening in the black box, build in this assumption. It is kind of roughly reasonable for some exposures, based on what we know. In the Glantz case, this would result in a claim of RR = 2 x 3 = 6, which is also not the same as 5.

So, for example, if a certain level of smoking causes lung cancer risk with RR=20, and a certain level of radon exposure causes RR=1.5, then if someone has them both, it is not unreasonable to guess that the combined effect causes RR=30. The impact on the body in terms of triggering a cancer and then preventing its growth from being stopped seems like it would work about like that. On the other hand, there are far more examples where the multiplicative assumption is obviously ridiculous. If BASE jumping once a week creates a weekly RR for death of 20, and rock climbing once a week has RR=2, doing each once a week obviously adds, as above, for RR=21, rather than multiplying to 40. (Aside: most causes of heart attack are probably subadditive, less than even this adding of the excess risks, as evidenced by dose-response curves that flatten out, as with smoking.)
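Both combination rules from those examples, sketched side by side (the RRs are the illustration numbers above, not real estimates):

```python
def additive_rr(*rrs):
    """Additivity of excess risk: appropriate for independent risk pathways."""
    return sum(rr - 1 for rr in rrs) + 1

def multiplicative_rr(*rrs):
    """Multiplicative combination: the default assumption in most models."""
    out = 1.0
    for rr in rrs:
        out *= rr
    return out

# Smoking (RR=20) plus radon (RR=1.5): multiplication is the plausible model here.
print(multiplicative_rr(20, 1.5))  # 30.0
# BASE jumping (RR=20) plus rock climbing (RR=2): addition is the plausible model.
print(additive_rr(20, 2))          # 21
```

Which rule applies is a question about mechanism, not arithmetic convenience.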

But importantly, notice the “each once a week” caveat. That addresses the key error with the stupid “dual use” myths by specifying that the quantity of each activity was unaffected by doing the other. If, on the other hand, someone is an avid BASE jumper, doing it whenever he can get away, and he takes up rock climbing, the net effect is to reduce his risk. The less hazardous activity crowds out some of the more hazardous activity. This, of course, is what dual use of cigarettes and vapor products (or any other low-risk tobacco product) does. This is not complicated. Every commentator who responds to these dual use tropes — and I am not talking epidemiology methodologists, but every last average vaper with any numeracy whatsoever — points this out. Vaping also does not add to the risk of smoking because it almost always replaces some smoking rather than supplementing it. In this case, using Glantz’s fictitious numbers, it would mean the RR from dual use would fall somewhere between 2 and 3. Not added. Not multiplied. Not whatever the hell bungled arithmetic that Glantz did. Between.
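The substitution point can be put into crude arithmetic, under the strong simplifying assumption that risk scales linearly with the fraction of consumption shifted from smoking to vaping (again using Glantz's fictitious RRs):

```python
def dual_use_rr(rr_smoke, rr_vape, fraction_replaced):
    """Crude linear-substitution sketch (a strong simplifying assumption):
    a dual user replaces `fraction_replaced` of smoking with vaping."""
    return (1 - fraction_replaced) * rr_smoke + fraction_replaced * rr_vape

# Glantz's (bogus) RR=3 for smoking and RR=2 for vaping:
for f in (0.0, 0.5, 1.0):
    print(f"fraction of smoking replaced = {f}: RR = {dual_use_rr(3, 2, f):.1f}")
```

Whatever the fraction replaced, the result lands between 2 and 3. Not 5, not 6, between.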

As I said, everyone with a clue basically gets this, though it is worth going through the arithmetic to clarify that intuition. It is not clear whether Glantz really does not understand or is pretending he does not — as with Trump, either one is plausible for most of his lies. Undoubtedly many of his minions and useful idiots actually believe it is right. The “dual use” trope gets traction from the fact that interaction effects from some drug combinations are worse than the risk of either drug alone. Many “overdose” deaths are not actually overdoses (the term that should be used for all drug deaths is “poisonings” to avoid that usually incorrect assumption), but rather accidental mixing of drugs that have synergistic depressant effects, often because a street drug was secretly adulterated with the other drug.

But as already noted, that is obviously not the case with different tobacco products, whose risks (if any) are via the same pathways. Even if total volume of consumption was unaffected by doing the other (as with “each once a week”) the risks would not multiply and would probably not even add. Since that is obviously not true — since in reality, consuming more of one tobacco product means consuming less of others — the suggestion is even more clearly wrong. In fact, using the term “dual use” to describe multiple tobacco products makes no more sense than saying that about someone who smokes sticks that came out of two different packs of Marlboros on the same day.

In the context of tobacco products, the phrase “dual use” is inherently a lie. It intentionally invokes the specter of different drugs (or other exposure combinations) that have synergistic negative effects. That is not remotely plausible in this case. It also intentionally implies additivity of the quantity of exposure (“doing all this, and adding in this other”) when it is actually almost all substitution, as with which pack you pull your cigarette from. To the extent that it increases total consumption of all products, this is a minor effect (a smoker who vapes not only as a partial substitute, but also occasionally when he would not have smoked even if he did not vape). This only matters to someone who does not care about risk, let alone people, and only cares about counting puffs.

There is a long list of words and phrases that when used by “public health” people should make you assume that whatever they are saying is a lie: “tobacco” (when used as if it were a meaningful exposure category), “addictive” (meaningless for drugs with little or no functionality impacts), “chemical” (a meaningful word, but invariably used because it sounds scary), and “carcinogen” (when used as a dichotomous characterization, without reference to the relevant dosage and risk). “Dual use” should be added to this list, in the same general space as “chemical”, another word that is inherently just a simple boring technical descriptor, but that is almost exclusively used to falsely imply negative effects.

A balanced view of ad hominem judgments

by Carl V Phillips

Tap tap tap. Is this thing on?

Welcome back to this blog. As many of you know, The Daily Vaper, where I published most of my good material for a year, has ceased publication (the articles, fortunately, are still archived at my author page at dailycaller.com, and they redirect from the original links if you have used those somewhere). I also recently did a “best of” Twitter thread highlighting some of what I wrote there (and here and elsewhere). There is something simultaneously atavistic and postmodern about watching the (now nonexistent) DV website slip slowly down the list of top guesses for where I might want to go when I open a new browser tab.

I am writing most of my subject-matter analysis under contract these days, with a bit of freelancing for commercial websites. Deep-think tangents will start to reappear here. Like this one. (I thought about doing it as a Twitter thread, but I realized that would never work.) Continue reading

The travesties that are Glantz, epidemiology modeling, and PubMed Commons

by Carl V Phillips

I was asked to rescue from the memory hole a criticism of a Glantz junk paper from a year ago. I originally covered it in this post, though I do not necessarily recommend going back to read it (it is definitely one of my less elegant posts).

I analyzed this paper from Dutra and Glantz, which claimed to assess the effect of e-cigarette availability on youth smoking. What they did would be a cute first-semester stats homework exercise, but it is beyond stupid to present it as informative. It is simple to summarize: Continue reading

The academic scandal hiding within the Stanton Glantz sexual harassment scandal

by Carl V Phillips

By now you have probably heard about the lawsuit against Glantz by his former postdoc, Eunice Neeley. Buzzfeed broke the story here, which (like other reports) appears to be based entirely on the complaint filed in a California court (available here). There appear to be no public statements, other than the blanket denial that Glantz posted to his university blog, which was picked up in at least one press report.

I am fascinated by several details that were too subtle for the newspaper reporters. Continue reading