Tag Archives: Glantz

The travesties that are Glantz, epidemiology modeling, and PubMed Commons

by Carl V Phillips

I was asked to rescue from the memory hole a criticism of a Glantz junk paper from a year ago. I originally covered it in this post, though I do not necessarily recommend going back to read it (it is definitely one of my less elegant posts).

I analyzed this paper from Dutra and Glantz, which claimed to assess the effect of e-cigarette availability on youth smoking. What they did would be a cute first-semester stats homework exercise, but it is beyond stupid to present it as informative. It is simple to summarize:

Dutra and Glantz took NYTS data for smoking rates among American minors. They fit a linear trend to the decline in the minor smoking prevalence between 2004 and 2009 (the latter being the start of the e-cigarette era, by their assessment; the former presumably being picked from among all candidate years based on which produced the most preferred result, as per the standard Glantz protocol). They then let the slope of the trend change at 2009 and observed that the fit slope was about the same for 2009 to 2014. From this paltry concoction, they concluded e-cigarettes were not contributing to there being less smoking. They then — again, standard protocol — told the press that this shows that e-cigarettes are causing more smoking.
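The mechanics of that fit are easy to reproduce. Here is a minimal sketch in Python of a least-squares fit of two line segments forced to meet at a chosen knot year. The numbers are entirely made up for illustration; this is my reconstruction of the kind of model described, not the paper's actual code or data.

```python
import numpy as np

def fit_kinked_trend(years, prev, knot):
    """Least-squares fit of prev ~ a + b*(year - years[0]) + c*max(year - knot, 0).
    The fitted curve is two line segments forced to meet at `knot`;
    b is the pre-knot slope and c is the change in slope at the knot."""
    y = np.asarray(years, dtype=float)
    p = np.asarray(prev, dtype=float)
    t = y - y[0]
    hinge = np.maximum(y - knot, 0.0)
    X = np.column_stack([np.ones_like(t), t, hinge])
    coef, *_ = np.linalg.lstsq(X, p, rcond=None)
    return coef  # a, b, c

# Entirely made-up illustrative prevalence figures (percent), NOT the NYTS data:
years = [2004, 2006, 2009, 2011, 2012, 2013, 2014]
prev = [22.0, 19.6, 17.2, 15.8, 14.0, 12.7, 9.2]

a, b, c = fit_kinked_trend(years, prev, knot=2009)
# The paper's logic amounts to: if c (the change in slope at the knot) is
# not significantly different from zero, declare that e-cigarettes had no
# effect on the decline.
```

The point of showing this is how little machinery is involved: the entire analysis reduces to estimating one extra coefficient, and every conclusion depends on the unargued assumption that this three-parameter shape describes the world.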

My previous post includes my somewhat rambling analysis of why this is all wrong. After writing that, I accepted a request to write a tight version of it to post to PubMed Commons (presumably that was from Clive Bates, who is a big PMC fan, though I do not recall exactly). What follows, between the section break lines, is what I posted. It relies on more technical background on the part of the reader than I usually assume here, so I have added a few new notes (in brackets) to explain a few points.

Credit to Zvi Herzig for saving a copy of it before it was deleted (more on that story below), else I would no longer have a good copy. I am a year late in properly crediting Zvi for that favor by posting this.

It says something dismaying about the state of scientific thinking in this area that commentators on this paper have failed to recognize its glaring fatal flaw. This includes the journal editors and reviewers, but also the many critics of this paper, including the previous comment here and various blog and social media posts (example).

The paper’s analysis and conclusions are based entirely on the modeling assumption that U.S. teenage smoking prevalence would have declined linearly over the period 2004-2014, but for the introduction of e-cigarettes in the population. This assumption is made stronger still [i.e., more constraining and thus more speculative] by the additional assumptions that e-cigarettes introduced an inflection, with linear declines both before and after that introduction [i.e., that it consists of a line segment for each of the two periods, forced to meet at a corner, like a drinking straw with a kink folded into it], and that an estimate of whether their slopes are significantly different is a measure of the effect of e-cigarettes on smoking prevalence [i.e., the claim that because the bend in the straw is only slight, e-cigarette availability had no effect]. There is so much wrong with these assumptions that it is difficult to know where to start.

Probably the best place to start is the observation that the authors did not even attempt to justify or defend this body of assumptions. Given that the analysis is wholly dependent on them, this itself is a fatal flaw with the paper even if a reader could guess what that justification would have been. But it is very difficult to even guess. There are very few eleven-year periods in the historical NYTS data where smoking rate trends are monotonic (even ignoring the noise from individual years’ measurements), let alone linear. [Note: you can see a graph of the wildly varying historical data in my original post, and it is immediately obvious how absurd it is to try to fit a line to it.]

While teenage smoking prevalence is not nearly as unstable as many of their other consumption choices, no choice in this population can be assumed to be in the quasi-equilibrium state of, say, average adult beef consumption, where the shape of a curve fit to the gradual changes is largely immaterial. Unlike a stable adult population, the teenage population is characterized by both substantial cohort replacement between data waves [i.e., it is not the same people — the older kids from one year are no longer kids two years later, but are replaced with a new group of young kids] and fashion trends (rapid changes in collective preferences). Unlike many consumption choices, smoking is characterized by strong social pressures that sometimes rapidly change preferences. Indeed, the authors of this paper are proponents of the belief that marketing and policy interventions substantially change teenage smoking uptake. This makes their modeling assumption that the only impactful change over the course of eleven years was the introduction of e-cigarettes patently disingenuous.

In addition to the shape of the trend line, there are numerous candidates for modeling the impact of e-cigarettes, such as a one-off change in the intercept of the fit line rather than the slope [i.e., as if the straw were snipped apart and the bit after 2009 was allowed to shift up or down, but had to keep the same slope], or making it a function of e-cigarette usage or trialing prevalence [i.e., instead of forcing a linear fit on the latter period which effectively assumes that any effect increases proportional to the mere passage of time, use a curve that models the impact as proportional (or some other function) to actual e-cigarette exposure]. It is telling that the results from none of these alternative models, which are at least as plausible as the one presented, are reported. This either means that the authors ran such models but did not like the results and so suppressed them, or that the authors never even bothered to test the sensitivity of their model.
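To make the point about alternative parameterizations concrete, here is a sketch (Python, invented numbers, not taken from the paper) comparing the slope-change specification with an equally simple one-off level-shift specification. Nothing privileges one over the other a priori, yet they can tell entirely different stories:

```python
import numpy as np

def fit_with_break(years, prev, knot, slope_change=True):
    """Least-squares fit with one post-`knot` term: either a slope change
    (c * max(year - knot, 0), the paper's choice) or a one-off level shift
    (c * 1[year >= knot], an equally simple alternative). Illustrative only."""
    y = np.asarray(years, dtype=float)
    p = np.asarray(prev, dtype=float)
    t = y - y[0]
    if slope_change:
        extra = np.maximum(y - knot, 0.0)
    else:
        extra = (y >= knot).astype(float)
    X = np.column_stack([np.ones_like(t), t, extra])
    coef, *_ = np.linalg.lstsq(X, p, rcond=None)
    sse = float(np.sum((p - X @ coef) ** 2))  # sum of squared fit errors
    return coef, sse

# Invented series: a steady decline plus a one-off drop at 2009.
years = list(range(2004, 2015))
prev = [20.0 - 0.5 * (y - 2004) - (3.0 if y >= 2009 else 0.0) for y in years]

(_, sse_kink) = fit_with_break(years, prev, 2009, slope_change=True)
(_, sse_step) = fit_with_break(years, prev, 2009, slope_change=False)
# On this series the level-shift model fits exactly while the kink model
# cannot, so the choice of specification drives the "finding".
```

The design choice being illustrated: when several equally plausible break terms exist, reporting only one of them, with no sensitivity analysis, leaves the reader unable to tell whether the result is about the world or about the specification.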

Put simply, the analysis in this paper depends entirely on a set of assumptions that are never supported and, indeed, clearly unsupportable. While a convenient but clearly inaccurate modeling simplification may sometimes be justified for purposes of making a minor point, it is obviously a fatal flaw when it is the entire basis of a paper’s analysis and conclusions.

Despite this, critics of this paper have almost universally endorsed the authors’ unjustified assumptions rather than pointing out they are fatal flaws. In particular, they have focused on quibbling about the choice of inflection point. The authors chose 2009 as the zero-point for the introduction of e-cigarettes, while the critics have consistently argued that the next data wave (2011), when there was first measurable e-cigarette use, should have been used. They point out that the model then shows a substantially steeper linear decline for the latter half of the period, and suggest this is evidence that e-cigarettes accelerated the decline in smoking contrary to the original authors’ conclusions [i.e., if you use the original method but bend the straw at 2011 instead of 2009, the second half gets steeper after the inflection rather than staying about the same].

If the original model were defensible, one could indeed debate which of these parameterizations was more defensible (the advantage here seems to go to the original authors; generally one would choose a zero-point of the last year of approximately zero exposure, not the first year of substantially nonzero exposure). But the scientific approach is not to dispute parameterization specifics based on one’s political beliefs about e-cigarettes. It is to observe that if a genuinely debatable choice of parameters produces profoundly different model outputs, then the model cannot be a legitimate basis for drawing worldly conclusions. It is far too unstable. In other words, the critics offer an additional clear reason for dismissing the entire analysis, but rather than arguing this should be done, they endorse the core model.

I have written more, using this paper as context, about the tendency of some commentators to get tricked into endorsing underlying erroneous claims while ostensibly offering criticism, here.

As a final observation, a serious analyst doing a quick-and-dirty fit to a percentage prevalence trend this steep would not choose a line, which predicts there will soon be a departure from the possible range [i.e., a downward linear trend will cross into negative values, which is not possible for a population prevalence]. The standard choices would be a logistic curve or exponential decay. However, it is likely that non-scientist readers (the predominant audience of papers on this topic) will not recognize this. Such readers frequently see fit lines overlaid on time series graphs, and may not recognize that these are just conveniences to help reduce the distractions from the noise in the data, not testaments that every time series can be assumed to be linear. Had the authors chosen a more appropriate shape for their fit line, it would have called more readers’ attention to the fact that they were making unjustified assumptions.
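The point about the shape of the fit line is easy to demonstrate. In this sketch (Python, synthetic data), a prevalence series that actually decays exponentially is fit both ways; the linear extrapolation soon predicts an impossible negative prevalence, while the log-linear (exponential) fit stays in the possible range:

```python
import numpy as np

# Synthetic prevalence series (percent) that decays exponentially:
t = np.arange(0, 11, dtype=float)        # years since baseline
prev = 20.0 * np.exp(-0.18 * t)

# Linear fit (the paper's assumption) vs exponential fit (log-linear):
b1, b0 = np.polyfit(t, prev, 1)          # prev ≈ b0 + b1*t
k1, k0 = np.polyfit(t, np.log(prev), 1)  # log(prev) ≈ k0 + k1*t

t_future = 25.0
line_pred = b0 + b1 * t_future           # extrapolates below zero percent
exp_pred = np.exp(k0 + k1 * t_future)    # always strictly positive
```

A logistic curve would be the other standard choice; the point is simply that a straight line is the one shape guaranteed to eventually leave the space of possible prevalence values.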

The underlying issue here is much more important than one stupid paper. The reason that Dutra and Glantz could so easily get away with this is that epidemiology (and other associated social science, for those not inclined to call this epidemiology per se) is rife with strong modeling assumptions, which are seldom justified and often patently absurd. The reason several others wrote criticisms of the paper without ever identifying the fatal flaw (and, indeed, implicitly endorsing it) is that the problem is so pervasive that they do not even recognize it as a problem.

These modeling assumptions are sometimes so strong, as in the present case, that the analysis and results are purely artifacts of the assumptions. More often, they affect reported quantitative estimates in unknown ways, though almost certainly moving the result in the direction the authors would prefer. This problem is close to ubiquitous in epidemiology articles. I have written about this extensively, and I will summarize some of it in a Daily Vaper science lesson — riffing off of this material — shortly. So I will stop here.

Finally, there is the issue of PubMed Commons. For those not familiar, it is basically a third-party comments section which PubMed (which indexes the articles from most health science journals) attaches to the bottom of its page for each article. Anyone who has authored an article that is indexed by PubMed can comment there. For example, here is where the above comment used to appear. Yes, used to.

The story: After being asked by Clive (I am pretty sure) to write that, I posted it and shared the link. Zvi quickly wrote me a note suggesting I edit it. In the original version, the last paragraph had some bits about how the authors’ political biases obviously influenced their choice of which among the multitude of candidate models to use. Zvi pointed out that PMC has a rule against “speculation on the motives of authors”, and that they will remove a comment over that. This is a fair enough rule, though it is not actually enforced (every comment from tobacco controllers I recall ever seeing there was just attacking the authors for supposed conflicts of interest due to corporate funding, which is speculation about motives and, moreover, an aggressive accusation that someone will lie about results in exchange for funding). As anyone familiar with Glantz et al. will know, my statement was hardly mere speculation. But it was not important, so I was ok with editing the comment to remove those bits.

A few days later, presumably after Glantz complained and pressured PMC about the comment, it was deleted. (There is no chance that one of the other commentators on the Glantz paper, who I also criticized, was the one who complained. They are decent and honest people who would have replied to the comment if they thought I got something wrong, rather than insecure propagandists who know they are wrong and so try to censor disagreement.)

I later got an official notice from PMC that I had violated some unspecified rule and would have to change that if I wanted the comment to appear. I replied to ask what in my comment had violated what rule, because I genuinely had no idea. That kinda seems like a reasonable question. But I got no reply. I followed up. I suggested that perhaps they were basing their assessment on the original version, that did violate their rule, and had missed the edit. I got no response to any query.

Now if I had speculated that Glantz was a sexual harasser who stole credit for his advisees’ work, I could understand why they would have objected. Apparently I would have been right, but it would not have been relevant to the paper. But the content of that comment was fully on-point. Even someone who could not understand why it was entirely valid could at least recognize that it was not some flight-of-fancy.

Clive really wanted some version of the comment to appear. He went so far as to take the time to do a rewrite that he guessed would pass the censors, though it is impossible to know since they refused to explain what bit of it they wanted changed. I could have posted that. Or I could have tried to just repost the above version again, based on my speculation that they were working from the original version that did violate their rule.

But you know what? Fuck that.

On top of everything else, the single message I got from PMC contained a rather rude threat to pull my credentials to comment there if I again violated those vague rules they refused to clarify. That would create a rather difficult situation for someone who wanted to continue to give them free content. Fortunately for me, I am not inclined to gift them any more.

PMC is a ghost town. There are very few comments, and their content is usually little more than a tweet’s worth. Based on what I have seen, the comments are of low quality on average, and are about equally likely to criticize something that is right or inconsequential as they are to identify obvious major flaws. Thus no one ever thinks, “I should check PMC for comments on this paper I am reading.”

PMC needs people to give them free content, and a lot more of it, before it has value. They need to force authors to reply to comments, not comply with takedown demands. So if they are going to blithely delete good content (I really doubt that they get, on any of their million pages, even one analysis per day of this quality), I am not really inclined to help them out. It is not like I or my loyal readers get any benefit from it appearing there, rather than just here or in The Daily Vaper.

During the course of my career, I have thought a lot about how to try to fix some of the problem of health science journals publishing so much obvious junk. Something like PMC, with its relatively large budget and the visibility that PubMed brings, is an approach with some potential. But if the plan is to crowdsource voluntary comments, then it needs to be free and open. Being rude and threatening to your voluntary contributors is definitely contraindicated.

A small operation that picks and chooses a very few targets, like a journal that encourages critical analysis or like Retraction Watch, can operate with tight command-and-control (if those in command are skilled and honest). But that cannot work for a system like PMC.

Trying to overlay command-and-control upon a crowdsourced system is hopeless. It is basically like Soviet planning, where someone tries to control a system (the market; the scientific debate) that is far too complicated to handle from the top down. But that is apparently what PMC is trying to do. They do not have the capacity to assess their content or, apparently, even to respond to requests for clarification. Instead, they just delete something without explanation if an author complains. Needless to say, this is an epic fail.

Moreover, given this arbitrary Soviet system, even someone who really really wanted to post to PMC would be a fool to write an analysis as detailed as mine, let alone the more involved analysis that would be required for a paper that is not so patently stupid. At least if you only write a tweet’s worth of content, you have not lost much if they delete it, and you stand some chance of guessing what to change if they make a vague demand that it be changed.

In summary: The way epidemiology models are created and accepted is a disaster, and the vast majority of the literature is suspect. This also specifically makes it very easy for intentional liars like Glantz and Dutra to concoct models to support their political positions. Not only do the academic and journal communities do nothing to resist this, but they actively work to resist any effort to respond to it, out of fear of admitting just how much rot there is. PMC is just one example of the many projects that could theoretically be part of the solution, but that actually remains part of the problem, resisting real scientific criticism.

Oh, and also, we rely on this literature to make behavioral, medical, and public policy decisions. Have a nice day.


The academic scandal hiding within the Stanton Glantz sexual harassment scandal

by Carl V Phillips

By now you have probably heard about the lawsuit against Glantz by his former postdoc, Eunice Neeley. Buzzfeed broke the story here, which (like other reports) appears to be based entirely on the complaint filed in a California court (available here). There appear to be no public statements, other than the blanket denial that Glantz posted to his university blog, which was picked up in at least one press report.

I am fascinated by several details that were too subtle for the newspaper reporters.

First, Glantz’s post in itself — using a university resource — may be yet another transgression. He declares, “I… deny every claim reported to be included in this lawsuit.” He notes that Neeley made a complaint to the university earlier this year and that someone else (presumably another junior researcher who was a Glantz protege[*] or employee) filed another complaint. He describes some details of the university investigation and repeats his denial. He does so just before — get this! — he writes, “Under University of California policy I am not supposed to discuss this investigation until it has been concluded and I have and will continue to respect that policy.” We know Glantz is not the sharpest tool, but you might think he would notice that the previous sentences in the same short document were him discussing this investigation before it concluded.

[*Dear reporters: The counterpart word for “mentor” is “protege”, “advisee”, or “student”. There is no such word as “mentee”, except perhaps as a misspelling of a flavor descriptor. That construction would only work if “mentor” meant “someone who ments”. Seriously, weren’t most of you English majors?]

Also the court complaint says (at 26) that Glantz had already talked publicly about the complaint against him and “boasted” about his defense. So we have a rather undisciplined defendant here. Not a big surprise.

Caveat: I am obviously just working from the complaint document, which might contain inaccurate assertions, though it also might fail to mention damning information that will come out later. Feel free to sprinkle these through as necessary: alleged, alleged, alleged, alleged, if true, if true, if true, claimed, claimed, reported.

First, the headline complaints themselves, those about sexual harassment, were not on the level of recent scandals that included coerced sex acts, aggressive unwanted propositions and …um… displays, and criminal battery (I assume few would suggest the unwanted hugs by Glantz, which were not described as being of an overtly sexual nature, are an example of battery). The sexual harassment complaints against Glantz are a pattern of boorish leering, a few instances of gratuitous and discomfiting sexual stories or analogies, the hugs, and some vague hearsay claims. That is basically it.

That is not to dismiss these, of course, let alone say they were acceptable. There is no queue, such that complaints of less heinous harassment need to wait their turn. The sooner we have lots of muscular #MeToo actions like this, the better for our society. The acts Glantz is accused of might still violate laws, university rules, or grant funders’ rules (I offer no opinion on whether they do). It is obvious why someone would dislike this experience and want to escape from it. Any decent employer, upon hearing of this, would order the supervisor to stop doing such things and respect the junior employee’s request for reassignment.

Still, the claims are about the vulgarity, ickiness, and insensitivity that women endure as a result of merely being in the same room as creeps, not assault or exploitation that was facilitated by the abuse of power. It is difficult to imagine these allegations (by themselves) taking out a senior professor. I have witnessed academic shops where bawdy talk by everyone was the norm and no one seemed to mind (though perhaps it is relevant that these were headed by women; a sensible male professor will stay out of any such conversations even if they are common banter among his advisees). If Glantz were sensible, he would have made this all go away with an apology and an “I did not realize… but from now on….” (Spoiler: Glantz is not sensible.)

The racism (if that is even the right word) complaints are weak tea. The lawyer writing the complaint made a big deal about racism issues (Neeley is black), for obvious reasons. But the specifics offered are merely: one story of what might have been (though it seems quite possible it was not) a stupid “you are black, so I just assumed you…”-type suggestion; an unsubstantiated assertion that papers by Neeley and another nonwhite postdoc were subject to more review and editing than their peers (which is pretty much impossible to measure, let alone to rule out a defense that their papers simply needed more editing); and a statement that seems to be Glantz telling another nonwhite researcher in the shop she was a diversity hire.

This is not to say that any of the above is ok. What Glantz said and did was stupid. Really really stupid, and rude and gross. It would be stupid in any setting, but an academic advisor is held to higher standards. If Glantz really did these things (and it is really difficult to fathom why someone would make up these particular details), but somehow did not understand the harm they were causing, he should have — upon receiving the first complaint about them — tripped over himself apologizing to Neeley and the others, promising to stop but still offering to make it easy for her and others to transfer to other advisors, and then staying out of their lives if that is what they seemed to want, and issuing a solemn statement of concern and promises to everyone in his shop.

If he did not really commit the alleged acts, he should still have apologized for whatever disaster did inspire the allegations, and still helped Neeley find another mentor if that was what she wanted. If he demonstrated he is not decent enough to do one of these, the university should have pushed him to. Note that it is possible that some of this happened — the complaint does not deny there were such attempts to fix the problem, and it is not as if the plaintiff’s attorney is going to volunteer that there were.

Regardless of the seriousness of the original behavior and any hypothetical attempts to fix the problem, what happened next — much more than the salacious allegations from the headlines — is why Glantz must be forced to resign, and the university should be on the hook for a payout.

The complaint chronicles how Glantz, after he was no longer Neeley’s supervisor, demanded that he still have an authorship role (both credit and contribution) in the paper she had written under him. Indeed, what he was demanding was lead-author control over it. This was a whole additional level of stupid, as well as making the hypothetical scenario (that he or the university sincerely tried to fix this) seem extremely unlikely. This was not some landmark paper that represented a scientific innovation by Glantz that he would understandably not want to give away. It appears to be just another iteration of the silly conspiracy stories about the tobacco industry, the amateurish cherrypicking historiography, that Glantz and his minions have written at least a dozen times before. This perfectly demonstrates his clueless arrogance (I suspect that narcissism would be the technical diagnosis) when he did not cut his losses and just give the worthless paper away. No. One. Would. Care.

What is worse, he demanded to be a coauthor on her future papers. This is where the story in the complaint gets more subtly interesting. He was either demanding that he keep working closely with her and have major influence over her, despite what had happened, or he was basically insisting she commit academic fraud on his behalf: to make him an author of papers he did not substantially contribute to.

And apparently not content to keep the academic fraud vague and deniable, Glantz marched on.

The complaint says he reneged on a promise to let Neeley be the corresponding author for their paper, demanding that role for himself. Even if there had been no such promise, and even setting aside that he was no longer her advisor and was a fool to not be contrite and conceding, it was properly up to her. She wrote the paper, and a postdoc is plenty senior to make the decision and take whatever role she wants. In addition, Glantz insisted on adding one of the in-house commentators to the paper as an author for what the complaint implies (though is very sloppy about actually saying) are mere reviewer comments, not worthy of authorship.

[Aside: Fake authorship is a major problem in public health publishing. Because the actual value (or total lack thereof) of someone’s papers is pretty much ignored, only numbers count. An unearned authorship still adds to the numbers. I have known researchers who struck a deal to include each other as authors on every paper, despite a complete lack of involvement. I quite enjoy asking anyone who has “authored” hundreds of papers — often more than ten per year — if they have even read them all. (They have not.)]

Apparently the university backed Glantz’s insistence that Neeley still had to work with him on the first paper and yield to his decisions about it. Stupid, stupid, stupid. The complaint spins this as a method to manufacture excuses to deny Neeley authorship of the paper, given that she was strongly averse to having any dealings with Glantz anymore. And, sure enough, a bit later Glantz went ahead and submitted the paper under his name, without telling Neeley or including her as an author. The complaint also says that Neeley heard that Glantz was planning to steal another paper she had written or was writing.

This (assuming it is true) is bright-line academic fraud. It is the worst kind of plagiarism and an unforgivable violation of his duties. Whatever one thinks of the seriousness of the headline rude and gross behavior, and whatever one thinks of an academic not endeavoring to make amends for hurting his protege even if accidentally, there is no getting out of this. The university has to force him to retire (or better still, overtly dismiss him), and his funders have to pull their funding. Importantly, unlike most of the acts alleged in the complaint, Glantz’s submission of Neeley’s paper without her permission or name is easy to demonstrate using evidence beyond mere eyewitness testimony.

This action also reinforces the claims in the lawsuit that Glantz retaliated and the university abetted it. The cliche that the coverup is worse than the original crime is usually bullshit. But disturbingly often in sexual harassment cases, the abuse of accusers after they speak up — to cover up or just to punish them for standing up for themselves — seems to cause more harm than the original acts (not to imply the original acts are not harmful, obviously, nor to deny that sometimes they are extremely harmful).

Despite this, the #MeToo movement should hesitate to embrace Neeley as a poster victim. Boorishness and academic technicalities are not exactly the most tragic abuse stories we have heard recently. In addition, there is an intriguing cause of action stated in the lawsuit (at 71), about unjust enrichment. No details are explained, so maybe this is just a meaningless copy-paste of the lawyer’s standard boilerplate. But if not, the only apparent enrichment in sight is the $20 million that FDA gave Glantz to write stupid things about tobacco. Or, apparently, to hire people to write stupid things about tobacco that he puts his name on. If any of his actions were a violation of the terms of that contract (I have no idea) and if Neeley can get official federal whistleblower status (again, no idea) then she and her lawyer could collect millions if the feds sue University of California to pay back the grant money. It is better they get rich than the money be used to produce junk science, of course, but we are still not talking about selfless nobility.

[Update/Correction: The above paragraph incorrectly implies that the FDA grant is the only substantial funding Glantz has or had that is at risk for clawback or that might be subject to whistleblower awards. There are other big paychecks that also might. See the comments for more details.]

Or it could be quite the opposite, that Neeley is willing to expend her own resources just to retain control of her paper (as is demanded, as a matter of immediate relief, in the complaint), and all the demands for monetary compensation are just the lawyer looking to eke out his fee. We do not know at this point. However, we do know that there are no good guys in this story.

Neeley was, by choice, a Glantz acolyte and thus is presumably seeking a career producing junk science and riding the tobacco control gravy train. Everyone else in the story — the unnamed other victims and the senior researchers in Glantz’s shop who Neeley sought help from — were also in the business of producing anti-science that hurts a lot of people. Moreover, if(!) Glantz’s actions really were as long-standing and frequently-discussed as implied in the complaint, then the senior people are guilty of not speaking up about it to protect their proteges, and Neeley deserves moral credit as a whistleblower.

There is ultimately something very Praljak-like about this (the war criminal who recently poisoned himself, getting the last word, after being convicted in The Hague). If Glantz goes down for this — and it is hard to see how he does not unless there are transparent fabrications in the complaint — it lets him off way too easily. It will be because he was a clueless oaf, and did not know when to cut his losses. It will let him quietly skate away from the tremendous damage his junk science has caused.

If he gets deposed, he will just be asked about roving eyes and authorship credit. He will not be asked about how many different data and model combinations he ran, and then hid, before finding the one that created the “Helena miracle” illusion. Nor will he be asked about how many people have explained to him the glaring flaws in his e-cigarette analyses — confusing liquids and solids, ignoring obvious confounding, combining incommensurate results, etc. — and how many times he further repeated the same misinformation anyway.

As with Praljak’s suicide, it is a bit closer to justice than him just getting hit by a bus, but it is not enough. Still, whatever puts an end to such a career — whether harassment and plagiarism charges, a human rights trial, suicide, or a bus — stops him from causing more harm, and that is what matters most.

NYT calls Trump a liar; critics fail to make it so clear about Glantz

[Update: For those who want more details of the criticism of the Dutra-Glantz paper, or are only interested in that and not the broader question of how to combat lies, I have posted a PubMed Commons comment here.]

Further on the critically important theme of my previous post, we are perhaps already starting to see a positive trend. The New York Times went as far as to identify one of Trump’s lies with the word “lie” in its top headline today. They did not go quite so far as to label him a “liar”, understandably, but that is implicit. Readers of this blog will recall my arguments for the importance of calling out liars as such. Piecemeal responses to each individual lie are a hopeless and ineffective tactic. For one thing, you end up with this problem: Continue reading

Feynman vs. Public Health (Rodu vs. Glantz)

by Carl V Phillips

I started rereading Richard Feynman’s corpus on how to think about and do science. Actually I started by listening to an audiobook of one of his collected works because I had to clear my palate, as it were, after listening to a lecture series from one of those famous self-styled “skeptic” “debunkers”. I tried to force myself to finish it, but could not. For the most part, those pop science “explainer” guys merely replace some of the errors they are criticizing with other errors, and actually repeat many of the exact same errors. The only reason they make a better case than those they choose to criticize is that the latter are so absurd (at least in the strawman versions the “skeptics” concoct) that it is hard to fail.

Feynman made every legitimate point these people make, with far more precision and depth. Continue reading

An old letter to the editor about Glantz’s ad hominems

by Carl V Phillips

I am going through some of my old files of unpublished (or, more often, only obscurely published) material, and thought I would post some of it. While I suspect you will find this a poor substitute for my usual posts, I hope there is some interest (and implicit lessons for those who think any of this is new), and posting a few of these will keep this blog going for a few weeks.

This one, from 2009, was written as a letter to the editor (rejected by the journal — surprise!) by my team at the University of Alberta School of Public Health. It was about this rant, “Tobacco Industry Efforts to Undermine Policy-Relevant Research” by Stanton Glantz and one of his deluded minions, Anne Landman, published in the American Journal of Public Health (non-paywalled version if for some unfathomable reason you actually want to read it). The authorship of our letter was Catherine M Nissen, Karyn K Heavner, me, and Lisa Cockburn. 

The letter read:


Landman and Glantz’s paper in the January 2009 issue of AJPH is a litany of ad hominem attacks on those who have been critical of Glantz’s work, with no actual defense of that work. This paper seems to be based on the assumption that a researcher’s criticism should be dismissed if it is possible to identify funding that might have motivated the criticism. However, for this to be true it must be that: (1) there is such funding, (2) there is reason to believe the funding motivated the criticism, and (3) the criticism does not stand on its own merit. The authors devote a full 10 pages to (1), but largely ignore the key logical connection, (2). This is critical because if we step back and look at the motives of funders (rather than just using funding as an excuse for ignoring our opponents), we see that researchers tend to get funding from parties that are interested in their research, even if the researcher did not seek funding from that party (Marlow, 2008).

Most important, the authors completely ignore (3). Biased motives (whether related to funding or not) can certainly make us nervous that authors have cited references selectively, or in an epidemiology study have chopped away years of data to exaggerate an estimated association, or have otherwise hidden something. [Note: In case it is not obvious, these are subtle references to Glantz’s own methods.] But a transparent valid critique is obviously not impeached by claims of bias. The article’s only defense against the allegation that Glantz’s reporting “was uncritical, unsupportable and unbalanced” is to point to supposed “conflicts of interest” of the critics. If Glantz had an argument for why his estimates are superior to the many competing estimates or why the critiques were wrong, this would seem a convenient forum for this defense, but no such argument appears. Rather, throughout this paper it seems the reader is expected to assume that Glantz’s research is infallible, and that any critiques are unfounded. This is never the case with any research conducted, and surely the authors must be aware that any published work is open to criticism.

Indeed, presumably there are those who disagree with Glantz’s estimates who conform to his personal opinions about who a researcher should be taking funding from, and yet we see no response to them. For example, even official statistics that accept the orthodoxy about second hand smoke include a wide range of estimates (e.g., the California Environmental Protection Agency (2005) estimated it causes 22,700-69,600 cardiac deaths per year), and much of the range implies Glantz’s estimates are wrong. But in a classic example of “a-cell epidemiology” [Note: This is a metaphoric reference to the 2×2 table of exposure status vs. disease status; the cell counting individuals with the exposure and the disease is usually labeled “a”.], Glantz has collected exposed cases to report, but tells us nothing of his critics who are not conveniently vulnerable to ad hominem attacks.
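[Note: To make the “a-cell epidemiology” point concrete, here is a minimal sketch. The numbers are invented for illustration and come from no real study: two 2×2 tables share the identical “a” cell (exposed cases), yet imply opposite associations, so reporting only the exposed cases tells us nothing about the direction of an effect.]

```python
# Invented numbers for illustration only (no relation to any real data).
# A 2x2 table: a = exposed cases,   b = exposed non-cases,
#              c = unexposed cases, d = unexposed non-cases.

def odds_ratio(a, b, c, d):
    """Odds ratio for the exposure-disease association in a 2x2 table."""
    return (a * d) / (b * c)

# Both tables have the same a = 30 exposed cases to point at...
harmful    = odds_ratio(a=30, b=70, c=10, d=90)  # exposure looks harmful
protective = odds_ratio(a=30, b=70, c=60, d=40)  # exposure looks protective

print(round(harmful, 2))     # 3.86
print(round(protective, 2))  # 0.29
```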

It is quite remarkable that given world history, and not least the recent years in the U.S., people seem willing to accept government as unbiased and its claims as infallible. Governments are often guilty of manipulating research (Kempner, 2008). A search of the Computer Retrieval of Information on Scientific Projects database (http://report.nih.gov/crisp/CRISPQuery.aspx) on the National Institutes of Health’s website found that one of the aims of the NCI grant that funded Landman and Glantz’s research (specified in their acknowledgement statement) is to “Continue to describe and assess the tobacco industry’s evolving strategies to influence the conduct, interpretation, and dissemination of science and how the industry has used these strategies to oppose tobacco control policies.” Clearly this grant governs not only the topic but also the conclusions of the research, a priori concluding that the tobacco industry continues to manipulate research, and motivating the researcher to write papers that support this. Surely it is difficult to imagine a clearer conflict of interest than, “I took funding that required me to try to reach a particular conclusion.”

The comment “[t]hese efforts can influence the policymaking process by silencing voices critical of tobacco industry interests and discouraging other scientists from doing research that may expose them to industry attacks” is clearly ironic. It seems to describe exactly what the authors are attempting to do to Glantz’s critics: discredit and silence them, to say nothing of Glantz’s concerted campaign to destroy the career of one researcher whose major study produced a result Glantz did not like (Enstrom, 2007; Phillips, 2008). If Glantz were really interested in improving science and public health, rather than defending what he considers to be his personal turf, he would spend his time explaining why his numbers are better. Instead, he spends his time outlining (and then not even responding to) the history of critiques of his work, offering only his personal opinions about the affiliations of his critics in his defense.


1. Landman, A., and Glantz, SA. Tobacco Industry Efforts to Undermine Policy-Relevant Research. American Journal of Public Health. January 2009; 99(1):1-14.

2. Marlow, ML. Honestly, Who Else Would Fund Such Research? Reflections of a Non-Smoking Scholar. Econ Journal Watch. 2008 May; 5(2):240-268.

3. California Environmental Protection Agency. Identification of Environmental Tobacco Smoke as a Toxic Air Contaminant. Executive Summary. June 2005.

4. Kempner, J. The Chilling Effect: How Do Researchers React to Controversy? PLoS Medicine 2008; 5(11):e222.

5. Enstrom, JE. Defending legitimate epidemiologic research: combating Lysenko pseudoscience. Epidemiologic Perspectives & Innovations 2007, 4:11.

6. Phillips, CV. Commentary: Lack of scientific influences on epidemiology. International Journal of Epidemiology. 2008 Feb;37(1):59-64; discussion 65-8.

7. Libin, K. Whither the campus radical? Academic Freedom. National Post. October 1, 2007.


Our conflict of interest statement submitted with this was — as has long been my practice — an actual recounting of our COIs, unlike anything Glantz or anyone in tobacco control would ever write. It read:

The authors have experienced a history of attacks by those, like Glantz, who wish to silence heterodox voices in the area of tobacco research; our attackers have included people inside the academy (particularly the administration of the University of Alberta School of Public Health (National Post, 2007)), though not Glantz or his immediate colleagues as far as we know. The authors are advocates of enlightened policies toward tobacco and nicotine use, and of improving the conduct of epidemiology, which place us in political opposition to Glantz and his colleagues. The authors conduct research on tobacco harm reduction and receive support in the form of a grant to the University of Alberta from U.S. Smokeless Tobacco Company; our research would not be possible if Glantz et al. succeeded in their efforts to intimidate researchers and universities into enforcing their monopoly on funding. Unlike the grant that supported Glantz’s research, our grant places no restrictions on the use of the funds, and certainly does not pre-ordain our conclusions. The grantor is unaware of this letter, and thus had no input or influence on it. Dr. Phillips has consulted for U.S. Smokeless Tobacco Company in the context of product liability litigation and is a member of British American Tobacco’s External Scientific Panel.

Glantz responds to his (other) critics, helping make my point

by Carl V Phillips

Yesterday, I explained what was fundamentally wrong with Stanton Glantz’s new “meta-analysis” paper, beginning with parody and ending with a lament about the approach of his critics who are within public health. Glantz posted a rebuttal to the press release from those critics on his blog, which does a really nice job of helping me make some of my points. I look forward to his attempt to rebut my critique (hahaha — like he would dare), which would undoubtedly help me even more.

Glantz pretty well sums it up with:

The methods and interpretations in our paper follow standard statistical methods for analyzing and interpreting data.

Continue reading

The bright side of new Glantz “meta-analysis”: at least he left aerospace engineering

by Carl V Phillips

Stanton Glantz is at it again, publishing utter drivel. Sorry, that should be taxpayer-funded utter drivel. The journal version is here and his previous version on his blog here. I decided to rewrite the abstract, imagining that Glantz had stayed in the field he apparently trained in, aerospace/mechanical engineering. (For those who do not get the jokes, read on — I explain in the analysis. Clive Bates already explained much of this, but I am distilling it down to the most essential problems and trying to explain them so the reasons for them are apparent and this is not just a battle of assertions.) Continue reading

Sunday Science Lesson: Identifying bullshit is usually easy (it just seldom happens in tobacco-land)

by Carl V Phillips

In the previous post, I quoted from Jon Stewart’s farewell monologue in which he alluded to how it is usually relatively easy to identify utterly bullshit claims and call them out. This includes utterly junk science. There are stories of master fraudsters in science, who carefully cook data and convince the world for years they have made game-changing discoveries, only getting caught after too much contrary evidence piles up. For some cases of junk science, detection requires a bit of clever expert analysis. But these cases should not distract from the fact that most junk science is junk on its face. Continue reading