Tag Archives: peer review

Peer review of: Charlotta Pisinger et al. (U Copenhagen public health), A conflict of interest is strongly associated with tobacco industry-favourable results, indicating no harm of e-cigarettes, Preventive Medicine 2018.

by Carl V Phillips

For an overview of this collection and an explanation of the format of this post, please see this brief footnote post.

The paper reviewed here is not available on Sci-Hub at the moment, or anywhere else (I will add a link if someone finds one). The abstract and portal to the paywalled version are here. And, yes, that series of words really is the garbled title the paper was published with.


This is one of the worst tobacco control papers of the year, and that is a high bar. It suffers from multiple layers of fatal flaws. On the upside, most anti-THR junk science is pretty unremarkable — the usual problems of using flawed methodology, erroneously assuming associations represent causation in a particular direction, statistical games, model fishing, etc. Several of those problems are present here too, along with others, thus creating value as a broad tour of the blatant biases and slipshod work that permeate tobacco control, as well as the low quality of what passes for qualitative public health research more generally.

The authors reviewed 94 journal articles (actually just a few sentences of the abstracts) about vapor product chemistry or effects of in vitro exposures (an undifferentiated muddle of incommensurate papers, which is itself a fatal flaw, though totally overshadowed by far larger flaws). They claim to assess the conflict of interest (COI) of the authors of each paper, though they do not. Then they do some statistics and imply that having industry-related COI causes authors to… well, do something bad, I guess, though this is neither stated nor supported by the analysis.

Just to get the most absurd flaw in the paper out of the way: In the title, the conclusion statement of the abstract, and elsewhere in the paper, the authors assert that many of the papers they reviewed claimed that vaping is harmless. I am not going to re-review their database to see, but I would be shocked if there were even two papers that said anything that could even be interpreted that way, and I would guess it is actually zero. Even a poor scientist would know enough to not claim that the one bit of chemistry or toxicology they assessed would support a claim of “harmless”. I would be shocked if there was a single paper by tobacco-industry-employed researchers in the entire modern literature that made that mistake, and fairly shocked if any researcher funded by a serious company was so clueless as to do so.

The authors’ own erroneous denial of COI…

Normally reviews in this collection mention the (seemingly inevitable) dishonest denial of COI by tobacco controllers as a brief aside at the end. But for a paper that is supposedly about COI, it is worth headlining. The authors claimed:

Conflicts of interest: CP and AMB state that they have no conflict of interest. NG has accepted several invitations (travel expenses, accommodation and conference fee) from the pharmaceutical industry to take part in international medical conferences in the last five years.

The paper itself makes clear that the authors have a serious anti-industry bias, which is a huge COI for any paper about vaping. Indeed, the content of the paper suggests this reaches the level of being a fatal problem for this particular analysis: it rendered them incapable of doing any legitimate work. But even if it is not that bad, it obviously is not “no conflict of interest.” The first author is an anti-vaping activist; she has written nothing of importance, but we can see from her anti-vaping testimony to the EU in 2013 and this recent commentary that she has long practiced anti-vaping activism, and is willing to make absurd claims in pursuit of that conflicting interest. [Update: Someone has noted in the comments that she has also previously acknowledged doing work for pharma.] The second author, Nina Godtfredsen, has written no apparent anti-vaping commentaries, but has long been involved with institutional tobacco control. The third author could probably get away with the “no COI” claim based on her lack of publications. But for all of them there is still obvious bias evident in the paper, as well as their financial COI described below.

…and the related failure to do their own research correctly.

The authors’ obliviousness to their own COIs, as demonstrated in the above dishonest “disclosure”, creates the next fatal flaw in the paper itself: the notion that a large portion of authors had no conflict of interest. The authors assessed only whether the authors of the papers in their dataset had any funding relationship with industry, or with one of (only) three anti-vaping funders (NIH, FDA, and WHO). But, as anyone who actually understands the concept of COI knows, funding is only one of many COIs, and is seldom the most important. Everyone writing in this space has non-financial COIs. Moreover, the types of funding relationships described in papers’ front- and end-matter are only one of many financial COIs, and are often not the most important. For example, a one-off grant to write a paper is a far less important financial COI than working for a university department that, for other projects, gets substantial ongoing funding from an interested agency.

Also, in case you have an inclination to give the authors credit for trying to assess the effects of NIH/FDA/WHO funding (why just those three??), note that they wrote, “Ten out of 63 studies without a conflict of interest were funded by NIH, FDA or WHO” (emphasis added). That is just funny.

This error only became fatal upon publication. If anyone who understood the concept of COI had been involved in editing or reviewing this paper for the journal, or merely been asked to comment on a manuscript, they could have fixed the problem: They would have just told the authors to replace the absurd “no conflict of interest” category and language with “no industry funding” or some other description of what they actually claimed(!) to have coded. But apparently no one with any expertise in the titular subject matter ever read the paper before it appeared in a journal.

That would have saved the authors from broadcasting how little they understand COI quite so loudly. However, it would not have solved the next layer of fatal error, in which they fail to understand even financial COI. Their methodology for assigning COI scores is opaque, and basically could be described as “what the first author thought the coding should be.” There are some words, but they are a muddle. There is no clue exactly what the key word “sponsored” means (let alone “partially sponsored”), and no indication which entities counted for which category (would FSFW count as “the tobacco industry” in their minds? probably, but they do not say). There is a vague explanation for what information was used, and it appears to just be the front- and end-matter of the paper (funding acknowledgments, institutional affiliations), or that in other papers by the same authors for publications that omit that information. Presumably the authors could provide some clarification if asked, so the sloppy reporting of the methods is mostly a testament to how unserious both the authors and the journal process were.

Financial COIs are not homogeneous.

The bigger problem here is that the authors apparently do not understand how different types of funding create different COIs. A one-off grant, actually working for CDC or PMI, and an ongoing center grant are very different. An investigator-initiated project, whatever funding it manages to get, is very different from a funder issuing a highly-specific RFP or contract. An unrestricted pool of research money is different from a funded predefined research program. Figuring out exactly how to score these differently would be tricky, but the authors seem oblivious to the fact that you need to at least try.

Also, funders vary. It is difficult to imagine FDA renewing a center grant if the researchers produced a series of papers that undermined FDA’s political agenda, and tobacco control funders have a history of cutting off funding when they do not like someone’s results. It is equally difficult to imagine a major tobacco company cutting off someone’s funds for publishing an inconvenient result. Thus, external grants from some funders create huge conflicting interests to support an agenda, while other funders — major tobacco companies in particular, who would not dare do so even if they were inclined — are very unlikely to impose pressure for particular results.

Chances are that these authors (and others who think this paper has any value) are unaware of this fact. If they are aware, their political views (i.e., their COIs!) presumably would result in them ignoring it. But even conceding that, it is still a fatal error to just lump all financial relationships into the categories “fully sponsored” research, “partially sponsored”, and other funding relationships. Indeed, the latter category in this ordered ranking is potentially more influential for someone who is inclined to cook her results to please a funder. Which is more enticing, getting your department a few thousand dollars to hire a student RA, or getting a personal honorarium and expenses to jet off to a conference to present the results?

Then there are the various financial COIs that do not appear in the end-matter. As already noted, an operation (e.g., university department) may be very beholden to funding from a particular political faction in the tobacco wars (read: from institutional tobacco control), regardless of the funding for the current paper. It seems safe to assume that these authors know that if they had actually written the analysis they pretended to, genuinely assessing the COIs of paper authors (assessing commentaries they had published; looking into their departments’ funding; etc.) and reporting them (e.g., noting that some of the authors are anti-vaping activists), then they would have been off the tobacco control gravy train for the rest of their careers. Even more so if they had assessed the quality of the research rather than just glancing at the abstracts. Grant funding for a paper does not create a financial COI — the money is already pocketed. The grant funding you want for your next paper, or for the rest of your career, does. (Past funding may create a COI in terms of disposition or attitude, but these authors were ignoring all such COIs.)

One subtle problem that is baked into the methodology, hardly worth mentioning given the major problems, is that the authors chose to code papers the same whether all of the authors had some industry affiliation or just one of them. Good independent research projects tend to assemble ad hoc groups of authors, with a reasonable chance that at least one is expert enough to have consulted for industry, so they would get coded as if they were industry projects.

The authors make no attempt to assess the quality of the papers they reviewed.

But let’s move on and counterfactually imagine having a useful measure of the COI that potentially affects each paper in some collection. What should someone do then? The most useful task would be to look at the methodologies to see if the study designs (what questions were being asked, what apparatus were used and how, quantities, etc.) seemed to be biased in ways that would advance the apparent conflicting interests. In particular, it would have been interesting to know which papers have been lambasted in public comments for their methodological failures (e.g., the ones in this collection that overheated the coils, thereby producing a cocktail of nasty chemicals that no one would ever vape). But that is not what the authors did. Indeed, it is apparently way over their heads. They presumably lack the understanding of the relevant science to assess the methods themselves (suggesting that perhaps they had no business undertaking this project without a coauthor who could), and it seems unlikely they are even capable of assessing which third-party criticisms of the methods are valid.

So instead they claim(!) to have looked at the results. That would not be useless, and writing “the authors really should have written this other paper I would rather read, even though it would have been beyond their ken” is never great in a review. But it should be recognized that this is a clearly inferior approach: Results differ from true values of what is purportedly being measured due to a combination of identifiable study design biases, hidden biases that even the study authors might not be able to recognize, and random error.

They did not even really look at the results.

Except they really did not look at the results. They really just looked at the conclusion sentence in the abstract. First, only the abstracts were reviewed. The authors rationalized this lazy approach by claiming it was done not because they only had two days to spend on this, but to represent the “real-life scenario” of most readers only looking at the abstract. That is actually useful to analyze and report, though it is inexcusable that they made no attempt to also assess whether what was reported in the abstract accurately represented the results of the study. Abstracts that misrepresent the results are, of course, a common manifestation of anti-tobacco COIs, and thus a particularly good way to assess whether COI influenced the reporting. Thus, the representation by the authors that they assessed the results of the studies is simply false.

The second and third authors — who appear to be unqualified to answer the question in most cases — coded each abstract for “Do the results indicate potential harm to health?” This is a scientifically illiterate question when asked without quantification; any analytic chemistry result and any toxicology result other than “no effect whatsoever was detected” (which presumably never happened) will contain information about some potential harm to health. Anyone who would ask this question (let alone sometimes answer “no”) is clearly unqualified to do this analysis. Presumably they were really just answering this question based on the stated conclusions, not the results. Thus, their two coding questions really become one question with an ad hoc element of quantification (i.e., the first is just a less sensitive version of the second).

The second question they coded, which is clearly what they were really interested in (and, sadly, genuinely closer to the typical know-nothing “real-life” method of reading practiced by supporters of tobacco control) was “What are the conclusions? (1. Concern that e-cigarettes might harm users’ health or public health; 2. No concern that e-cigarettes might harm users’ health or public health/recommend them as harm reduction strategies or 3. Unclear).” But as anyone who reads the tobacco control literature knows, such statements are just throw-away editorializing that seldom have anything to do with the actual research results. The only legitimate statement of concern or lack thereof would be “harmful levels of X were [not] detected in this study” or “the cellular effects detected in this study are [not] generally believed to represent real health risks.” It is slightly interesting to parse COIs against the usual editorializing to look for an association (spoiler: tobacco controllers throw in unsupported political editorializing in everything they write; industry researchers stick to the facts). But that, which is all they actually did, was not what these authors pretended to be doing.

The authors repeatedly make a big deal about how these assessments of the abstracts were blinded regarding authorship. This is comical. You would have to be dumb as a rock to not recognize the difference between a just-the-facts abstract written by a careful industry research team and the political screechings that are written by tobacco controllers. Yes, there is some middle ground, but not much.

Moreover, the stated coding for the second question is also scientifically illiterate. Anyone who does a single study of the types analyzed and draws any conclusion that would actively indicate “no concern” about exposure to vaping in general should be condemned for that, and perhaps that should be blamed on COI. Presumably approximately none of the abstracts actually said that. The stated “concern” that is coded as 1 (especially the bit about “public health”, which would require social science analysis, not just chemistry) is clearly just a measure of anti-vaping editorializing. It is possible to make a chemistry or toxicology discovery that raises alarm about some exposure, of course, but it happens that in the case of vaping there have been no such discoveries. Thus any stated “concern” is purely political.

In other words, this coding basically divides the world into authors who were actively editorializing (on the anti side) and those who just reported the facts. Whatever funding history makes someone more likely to fall into the latter category (spoiler: that would be avoiding institutional tobacco control funding) should be commended.

If the other aspects of the methods were not so unserious, it would be important that there is a big difference between where someone looks and what they do, a distinction the authors seem oblivious to. That is, a researcher that intended to concoct a politically favorable result might design the labs they are running to do that, or game how they report the results. Or they might just choose to assess something where they already know they will like the results. For example, an anti-vaping activist might choose to overheat the coil to produce a nasty cocktail, or he might just decide to observe whether there are detectable molecules of diacetyl or nitrosamines — already knowing there are — with the intention of spinning those trivial quantities as harmful. Alternatively, authors might do a study of COIs in which they design their coding to ensure that anti-vaping authors are said to have no COI, or they might just choose to observe whether high-quality industry research tends to produce reassuring results — already knowing it does — with the intention of spinning that as biased. (Or they might do both.) The omission of any attempt to assess how much of each of these is happening is pretty minor, given the other flaws, and is probably another “beyond their ken” point. But someone who was trying to do a serious version of the ostensible research would have done it.

Results (one is actually interesting!), unsupported conclusions, and the bizarrely absent conclusion.

The authors’ interpretation of the results from this train wreck methodology is hardly worth mentioning, let alone their silly use of statistical tests for the resulting trivial crosstabs (“hey, look at us, we is doing real scientifics!”). They dichotomized each of their codings, presumably in the way that produced the most dramatic results, but at this point who even cares about subtle clever ways of biasing the results.

Naturally, these “no conflict of interest” authors (LOL) concluded — after producing what is basically the observation that only non-industry studies tend to include absurd conclusions — with further absurdity: “a strong association between an industry–related conflict of interest and tobacco/e-cigarette industry–favourable results, indicating that e-cigarettes are harmless.” As already noted, presumably no one ever claimed the latter. Moreover, they never explain what they think an industry-favorable result even is, let alone make a case for why it is that. On top of that — and here is the craziest thing about this hot mess — they never actually claim that the COIs that (they assert without analysis) result from industry funding caused anything. Not once, here or anywhere else. It is not just that they do not propose a mechanism via which industry-associated researchers tried to get “industry-favourable [sic]” results (e.g., by only looking at the cleanest vapes or using too-small quantities). They do not ever even assert that those researchers tried to do so.

They literally wrote not a word about why the content of industry-associated abstracts might systematically differ from those of tobacco controllers. (The present review explains why; they did not even attempt to offer an alternative story.) They are so far down their rabbit hole of biased madness that they presumably think it goes without saying that the only possible explanation for the difference is that everything tobacco controllers write is valid and…? And what? Even if we pretend they made that counterfactual claim about tobacco control papers, they still fail to even assert that being associated with industry results in inaccurate study results. They are so blinded by their COIs that they do not even realize they forgot to say it. Of course, anyone who would take this paper seriously presumably lives in the same rabbit hole, so I suppose that does not matter much.

Their lack of ever even asserting that industry researchers do anything wrong, however, does not stop them from continuing with, “Some journals have already decided they will not publish tobacco industry–funded research. The present authors recommend all journals to follow in their footsteps.” Yes, that’s right. They made zero attempt to assess whether those studies used good methodology, or even whether they accurately represented their results. But they then called for censoring them all because those papers did not inappropriately editorialize in these authors’ preferred direction. What more do you need to know about the COI of tobacco controllers?

Serious readers, unless they are trying to write a review, will probably not even bother to look at the reported results. But that would cause them to miss the one interesting result — interesting when assessed based on what was actually done and not what the authors pretend was done: “Analyses showed that there was no difference [sic: 94 and 100 are different] in findings of harm between studies funded by [NIH/FDA/WHO] (10/10, 100%) and studies without [sic: sarcastic] conflict of interest funded by other sources [sic: actual editing error] (48/51, 94.1%; p = 0.831).” The authors presumably do not realize that what they found is that authors they pretend have “no COI” were almost as likely to engage in anti-vaping political spin as those with the strongest financial incentives to do so. In other words, within the world of authors who avoid any industry affiliation (or are too unskilled to be invited to have one), the COIs that are driving their behavior are mostly non-financial. We already knew that, of course, but this is an interesting statistic in support of that.
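For anyone who wants to check that comparison themselves, here is a minimal sketch in Python. Note the assumptions: the paper does not say which test produced its p-value, so Fisher’s exact test is just one standard choice for a crosstab this sparse, and the counts are simply the ones quoted above.

```python
# Minimal sketch, not the paper's actual (unstated) method. The counts are
# the ones quoted above: 10/10 coded "harm" for NIH/FDA/WHO funding, 48/51
# for other non-industry funding. The choice of Fisher's exact test is an
# assumption on my part.
from scipy.stats import fisher_exact

table = [
    [10, 0],   # NIH/FDA/WHO-funded: coded "harm", coded "no harm"
    [48, 3],   # other non-industry funding: coded "harm", coded "no harm"
]
_, p = fisher_exact(table)
print(f"two-sided Fisher exact p = {p:.2f}")
# Whatever test is used, both groups code "harm" at near-identical rates
# (100% vs 94.1%), which is the interesting result discussed above.
```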

Introduction and Discussion.

To round out this review: The Introduction is mostly just what you would expect, with random undergraduate-level discussion about vaping in general and various bits of ancient history, along with the usual pot-kettle ranting about COI. There are no references to any serious analyses of the concept of COI, nor a word about what the authors think COI even is — unsurprisingly, since it is obvious they have no idea. Interestingly, however, there is a paragraph about how “contradictory” results in the literature may be driven by different methodologies. You might think that someone who wrote that would go on to actually look at methodologies, but no. (Also, the fact that they think of different results from different methodologies as “contradictory” is a deep indication of just how unsophisticated these authors are. They are basically telling us that they themselves are only capable of reading at that naive “real-life” level they used — i.e., they only understand asserted conclusions, not the actual science.)

The Discussion section is a remarkable self-own. This is not limited to the “strengths and limitations” paragraphs, though these might as well just have read “not only do we not understand what a legitimate analysis of this would have looked like, but we don’t even understand what we did.” They are utterly unaware of the actual limitations noted above (I would normally suggest that they might be pretending to be unaware, but in this case they probably really are). What they cite as strengths are equally absurd for reasons noted above.

They note how the different coding dimensions got similar results, not recognizing that this is because they were measuring the same phenomenon. They complain, “No tobacco industry–related papers expressed concerns about the health effects of e-cigarettes,” not recognizing that this means that (a) they did not find anything alarming because (as far as we can tell) there is nothing alarming to find if you use valid methods, and (b) unlike the “no COI” researchers, they are proper scientists who do not draw conclusions about outcomes (let alone policies) that were not assessed in their research. They suggest that their results are similar to those from other fields, apparently oblivious to the huge differences among those situations and literatures. And of course they liken their result to the completely dissimilar ancient history when cigarette companies did produce dishonest research. They ramble on about this for a while.

Their second biggest self-own is, “Very concerning is that the tobacco industry papers are cited more often than papers written by independent [sic] researchers.” Hmm, I wonder if there is a reason for that? It also turns out that the New York Review of Books is cited more often than the National Enquirer.

Their greatest self-own, however, is, “Penalties to authors who do not disclose [COI] correctly have been proposed….” Um, I have some bad news for you.

FTFY.

Finally, this is what the abstract of the paper should have said, based on the above analysis:

In the vaping research space, the major corporations can afford the highest-quality personnel and equipment, and pay researchers to focus and take the time to do the work carefully and correctly. They are under enormous pressure to make sure their methods and results are valid and replicable, and to carefully avoid engaging in political editorializing when reporting their study results. By contrast, university and government research generally relies on students or other low-cost researchers. Those authors are usually under serious time pressure and often working on multiple projects, but face basically no pressure to do legitimate work. They feel free to spin the results and editorialize about their personal political opinions, and there are no repercussions when their methods or results are demonstrated to be fatally flawed. In addition, most such researchers are dependent upon (or hope to become dependent upon) grants from anti-vaping agencies, which are likely to not be forthcoming if the papers do not support the political agenda.

We assessed whether authors of papers about vape chemistry engaged in political spin in their abstracts and cross-tabulated that against funding sources. Our research found that papers that did not have the benefits of industry funding were much more likely to make unsupported anti-vaping political statements. Presumably this was mostly driven by them lacking the pressure to report accurately and avoid political spin, though many might have also used flawed or biased methodology (which we did not attempt to assess).

It is impossible to separate political statements that were motivated by financial conflicts of interest from those motivated by personal political conflicts of interest. However, we did observe that among the abstracts that did not have the benefits of industry funding, there was little difference in political spin between those whose authors were known to be under financial pressure to produce anti-vaping results (because they receive funding from U.S. anti-vaping agencies or WHO) and the others. This suggests that the politicizing in those papers is driven primarily by non-financial conflicts of interest, as well as the general sloppiness of university research papers, and that anti-vaping funders are largely just rewarding such bias rather than causing it.

Peer review of: Linda Johnson et al. (Washington U med school), E-cigarette Usage Is Associated with Increased Past 12 Month Quit Attempts and Successful Smoking Cessation in Two U.S. Population-based Surveys, NTR 2018.

by Carl V Phillips

For an overview of this collection and an explanation of the format of this post, please see this brief footnote post.

The paper reviewed here is available at Sci-Hub. The paywalled link is here.

This collection will focus mainly on the misleading anti-THR papers produced by tobacco controllers. However, it is useful and important to provide reviews of potentially important papers that might be called pro-THR. This is one example of a paper that has gotten a lot of “ha, take that!”-toned traction.

If a “pro-THR” paper is tight, a review will provide a substantive endorsement, as positive reviews should do (but as the anonymous and secret — and presumptively poor-quality — journal reviews cannot do), as well as a signal boost. If a paper is useful but importantly flawed (as in the present case), the review can correct or identify the errors and focus attention on the defensible bits. And if the paper is fatally flawed, the review should point that out. Bad advice is still bad advice when it feels like it is “on your side”. Even when a paper basically only provides political ammunition and not advice, it is important to assess its accuracy. We are not tobacco controllers, after all, who just make up whatever claims seem to advance their political cause.

—-

Johnson et al. use historical nationally-representative U.S. tobacco use data (NHIS from 2006 to 2016 and CPS over most of that period), for 25- to 44-year-olds, looking at the rate of smoking quit attempts and the association between vaping status and quit attempts or successful smoking abstinence. The authors report an unconditional increase in the population for both quit attempts (measured as the rate of past-year incidence among people who smoke) and medium-term smoking abstinence. They also report a positive association between vaping and smoking quit attempts and abstinence at the individual level. They interpret their results as running contrary to the recent spate of “vapers are less likely to quit” claims, stating “These trends are inconsistent with the hypothesis that e-cigarette use is delaying quit attempts and leading to decreased smoking cessation.”

This is an overstatement, but the results do run contrary to the “vaping is keeping smokers from quitting” trope that the authors position their paper as a response to. This research clearly moves our priors a bit in the direction of “yes, vaping encourages people to quit smoking, and helps them do so.” Our priors only move “a bit” because rational beliefs based on all available evidence tell us we should be very confident of that conclusion already. They should instead have said something like “even if you naively believe in those methods, for this data the result is different”, but such (appropriate) epistemic modesty is absent.

The paper is quite frustrating in that the authors seem to not recognize which of their statistics are actually most informative and persuasive, let alone take the deeper dive into specific implications that could have been done. The natural experiment interpretation of some of the results is more compelling than the behavioral-association-based analysis (see below). The authors overstate the value of their association statistics and effectively endorse the same flawed methods that are the source of the “vapers are less likely to quit” literature.

Footnote: Paper review posts

This is a prepositioned footnote to explain a series of posts I will be publishing.

I expect to soon be launching a major project that will publish a large number of proper peer-reviews of recent journal articles and some other papers in the THR space. (Fair warning to anyone planning to publish junk in the near future!) So, in order to lay in some material for that, develop protocols, learn-by-doing, and such, I am writing some entries for that collection now. Given that I am doing it, I might as well post them here. To find those posts, look in the comments section below for pingbacks.

[Update: My funding to continue that project was pulled. But you can still find the prototypes via the pingbacks here.]

The publications in this collection will not read like a typical blog essay, though they will be readable and reasonably free-standing, unlike a peer-review for a journal. For those familiar with the latter genre, think of them as a thorough and high-quality journal review — a rarity, I know — with a few hundred words added here and there to make it readable as an essay for someone not intimately familiar with the original paper. (And also with what would have been “the authors should fix this” phrasing changed to be phrased in terms of “the authors made this mistake”, because they also made the mistake of finalizing their paper before seeking the advice they needed to fix it.)

For those not familiar with journal reviews, just know that these pieces will not just address one or a few interesting points, in a narrative style, and not bother with the rest of the paper, as an essay would. They will have those interesting bits, but they will also step through a protocol for addressing each aspect of the paper (e.g., is the literature review in the Introduction legitimate, are the Methods adequately presented, etc.). Some of the bits will probably require reading the original paper to make sense of. For the reviews that I write, I will try to put any interesting narrative bits first, and make those free-standing. This will offer something to casual readers; if you are not interested in the full review, you can stop reading when you get to the disjoint bits about other aspects of the paper.

That is basically what you need to know to make sense of what you are reading. Once I have the guidelines more developed, I will post a link here if you want to delve deeper. In particular, I will be recruiting freelance contributors to write reviews, so if you are qualified and interested, please take note.

Public health publishing is fundamentally unserious: evidence from a single measure of area

by Carl V Phillips

Sometimes an error matters because of its effects. Sometimes it matters because of what it says about its causes.

I was late to this nice piece by Roberto Sussman (a guest post at Brad Rodu’s blog) that takes down a recent silly paper out of University of California about environmental deposition on surfaces resulting from vaping exhalate. They do not actually call it “third-hand vapor”, though they all but do so, explicitly likening it to the myths (which they endorse, of course) about “third-hand smoke”. For the analysis of the science, please read Roberto’s piece, because here I am just focusing on a single gaffe and its implications.

As background, note that this came from the supposedly respectable tobacco controllers at UC, including Benowitz and Talbot, not the utter loons in Glantz’s shop. It was published not in some random online journal, but in the supposedly respectable flagship journal of the tobacco control movement, BMJ’s Tobacco Control.

Reading Sussman’s piece, I came across this, which he quoted from the original paper:

“After 35 days in the field site, a cotton towel collected 4.571 micrograms of nicotine. If a toddler mouthed on 0.3 m2 [square meters] or about 1 squared feet of cotton fabric from suite #1, they [sic] would be exposed to 81.26 m [micrograms] of nicotine.”

Sussman’s post is analytic, but it was written as an essay and so I was reading it fairly casually. That is, I was not trying to actively check each bit of the math as I read it, as I would when reading a research report. But even a quick glance across that passage was enough for me to trip up and notice the error. A square meter is about ten square feet, and thus 0.3 m^2  is about 3 square feet. Sussman, who was reading the original paper carefully for purposes of criticizing it, of course also caught this error and noted it in his next paragraph.
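For anyone who wants the conversion spelled out, here is the arithmetic as a trivial Python sketch. The only inputs are the figures from the quoted passage; nothing here comes from the paper’s underlying data.

```python
# Trivial sketch of the unit-conversion goof. The 0.3 m^2 figure is from the
# quoted passage; this is illustrative arithmetic, not the paper's analysis.
FT2_PER_M2 = 1 / 0.3048**2   # 1 m^2 ~= 10.76 ft^2 (since 1 ft = 0.3048 m)

area_m2 = 0.3                # the area the paper imagines a toddler mouthing
area_ft2 = area_m2 * FT2_PER_M2
print(f"{area_m2} m^2 = {area_ft2:.2f} ft^2")   # ~3.23 ft^2, not "about 1"
```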

In theory this affects the thesis of the paper, which is based on the premise of a toddler sucking out all the nicotine that has accumulated in a towel that has sat untouched in a vape shop for a month. (Yes, believe it or not, that is really the premise of the analysis.) So the error means that the magical vacuuming toddler is given credit for extracting 3 ft^2 worth of accumulation by sucking the heck out of a mere 1 ft^2 of fabric.

However, this is not one of those convenient errors that creates artifactual results that matter. First, every bit of this scenario is obvious nonsense, as Sussman explains, and every step grossly exaggerates the real-world exposures. And, second, even with all that, the tripled quantity is still trivial. So this was not the common type of “error” from tobacco control research, one done intentionally to get the result the authors want. It merely changes the result from “a silly premise that despite its huge overstatement still only yields a trivial exposure” to “a silly premise that yields an exposure that is three times as high, but is still trivial.” It is obvious that the conclusions of the paper (“environmental hazard” — i.e., landlords should be pressured to not host vape shops) were in no way influenced by the results.

In addition, it is a pretty stupid intentional “error” to make. It is a bright-line error, which appears right there in the text, as if someone had written 2+2=5. The typical tobacco controller “errors” consist of such tricks as conveniently not mentioning that a crucial variable makes the entire result go away (which only very careful readers catch), or fishing for a model that produces the most politically favorable result and pretending it was the only version of the model ever run (which is easy to detect, but impossible to prove).

No, it is clear that this was a mere goof. Someone who is not so good with numbers was thinking “a meter is about three feet, so it must be that a m^2 is about three ft^2”. Oops.

But here’s the thing: Whoever was doing the calculations for the paper made that goof, but more significantly did not catch it on further passes through the material. In other words, no one ever thought carefully about the calculations. Then someone transferred the calculation notes into the text of the paper without noticing the error at that point. The other authors of the paper (there were four total) reviewed the calculation and the paper without ever engaging their brains enough to notice the error, and let it go out the door. Or perhaps they never even reviewed the calculations they were signing off on, and perhaps not even the paper.

Keep in mind that perhaps you, dear reader, might not notice this error on a quick read. Perhaps you did not even know that a m^2 is about 10 ft^2. But anyone who does science, and is burdened with the hassle of dealing with stupid non-SI American units of measure, knows stuff like this intuitively. As I said, I noticed it without even thinking about it, just like you would notice a misspelling even though you are not actively looking for mispelings as you read. Sussman noticed it, and he is a scientist who probably never sees mention of non-SI units in his work, and who lives in a normal country that uses SI units (i.e., “the metric system”) in everyday communication. It is apparent that none of the authors of the paper ever read it as carefully as he did.

The American authors, who need to be literate in translating from American units to scientific units, should have noticed it. It is a safe bet that if prompted, “there is an error in that sentence,” they would figure it out in a few seconds. So the point here is not that they do not know the units or how to do arithmetic, but that they did not pay enough attention to their own calculations to notice the simple error. They never really cared about the calculations, as evidenced by the conclusions that are not actually supported by the results.

They were not the only ones. The reviewers and editor(s) at BMJ Tobacco Control also did not read the paper carefully enough to catch the error. As I have noted at length on this page, journal peer-review in public health is approximately useless. A generalist copy editor would probably have caught it, but presumably BMJ TC does not employ one despite being hugely profitable.

This also means that no one other than the aforementioned seven or eight individuals read the paper carefully. Indeed, it is quite possible that no one else read the paper at all before it appeared in the journal. From the perspective of serious science, this is actually the biggest problem in public health research evident here: not circulating a paper for comments before etching it in stone, but rather creating a “peer-reviewed journal article” out of what is effectively a superficially polished first draft of a scientific analysis. Anyone who actually wants to get something right makes sure a lot of people read it critically before they commit to it.

Many errors in public health articles are a bit complicated, and pretty clearly happen because the authors and reviewers do not know enough science to know they were errors. Many others are pretty clearly intentional on the part of the authors, and signed off on by reviewers because they are incompetent, inattentive, and/or complicit in wanting to disseminate the disinformation. But a stupid error like this illustrates something different: Public health authors and journals are simply not even trying to do legitimate analysis.

What is peer review really? (part 9 — it is really a crapshoot)

by Carl V Phillips

I haven’t done a Sunday Science Lesson in a while, and have not added to this series about peer review for more than two years, so here goes. (What, you thought that just because I halted two years ago I was done? Nah — I consider everything I have worked on since graduate school to be still a work in progress. Well, except for my stuff about what is and is not possible with private health insurance markets; reality and the surrounding scholarship have pretty much left that as dust. But everything else is disturbingly unresolved.)

Feynman vs. Public Health (Rodu vs. Glantz)

by Carl V Phillips

I started rereading Richard Feynman’s corpus on how to think about and do science. Actually I started by listening to an audiobook of one of his collected works because I had to clear my palate, as it were, after listening to a lecture series from one of those famous self-styled “skeptic” “debunkers”. I tried to force myself to finish it, but could not. For the most part, those pop science “explainer” guys merely replace some of the errors they are criticizing with other errors, and actually repeat many of the exact same errors. The only reason they make a better case than those they choose to criticize is that the latter are so absurd (at least in the strawman versions the “skeptics” concoct) that it is hard to fail.

Feynman made every legitimate point these people make, with far more precision and depth.

An old letter to the editor about Glantz’s ad hominems

by Carl V Phillips

I am going through some of my old files of unpublished (or, more often, only obscurely published) material, and thought I would post some of it. While I suspect you will find this a poor substitute for my usual posts, I hope there is some interest (and implicit lessons for those who think any of this is new), and posting a few of these will keep this blog going for a few weeks.

This one, from 2009, was written as a letter to the editor (rejected by the journal — surprise!) by my team at the University of Alberta School of Public Health. It was about this rant, “Tobacco Industry Efforts to Undermine Policy-Relevant Research” by Stanton Glantz and one of his deluded minions, Anne Landman, published in the American Journal of Public Health (non-paywalled version if for some unfathomable reason you actually want to read it). The authorship of our letter was Catherine M Nissen, Karyn K Heavner, me, and Lisa Cockburn. 

The letter read:

——–

Landman and Glantz’s paper in the January 2009 issue of AJPH is a litany of ad hominem attacks on those who have been critical of Glantz’s work, with no actual defense of that work. This paper seems to be based on the assumption that a researcher’s criticism should be dismissed if it is possible to identify funding that might have motivated the criticism. However, for this to be true it must be that: (1) there is such funding, (2) there is reason to believe the funding motivated the criticism, and (3) the criticism does not stand on its own merit. The authors devote a full 10 pages to (1), but largely ignore the key logical connection, (2). This is critical because if we step back and look at the motives of funders (rather than just using funding as an excuse for ignoring our opponents), we see that researchers tend to get funding from parties that are interested in their research, even if the researcher did not seek funding from that party (Marlow, 2008).

Most important, the authors completely ignore (3). Biased motives (whether related to funding or not) can certainly make us nervous that authors have cited references selectively, or in an epidemiology study have chopped away years of data to exaggerate an estimated association, or have otherwise hidden something. [Note: In case it is not obvious, these are subtle references to Glantz’s own methods.] But a transparent valid critique is obviously not impeached by claims of bias. The article’s only defense against the allegation that Glantz’s reporting “was uncritical, unsupportable and unbalanced” is to point to supposed “conflicts of interest” of the critics. If Glantz had an argument for why his estimates are superior to the many competing estimates or why the critiques were wrong, this would seem a convenient forum for this defense, but no such argument appears. Rather, throughout this paper it seems the reader is expected to assume that Glantz’s research is infallible, and that any critiques are unfounded. This is never the case with any research conducted, and surely the authors must be aware that any published work is open to criticism.

Indeed, presumably there are those who disagree with Glantz’s estimates who conform to his personal opinions about who a researcher should be taking funding from, and yet we see no response to them. For example, even official statistics that accept the orthodoxy about second hand smoke include a wide range of estimates (e.g., the California Environmental Protection Agency (2005) estimated it causes 22,700-69,600 cardiac deaths per year), and much of the range implies Glantz’s estimates are wrong. But in a classic example of “a-cell epidemiology” [Note: This is a metaphoric reference to the 2×2 table of exposure status vs. disease status; the cell counting individuals with the exposure and the disease is usually labeled “a”.], Glantz has collected exposed cases to report, but tells us nothing of his critics who are not conveniently vulnerable to ad hominem attacks.

It is quite remarkable that given world history, and not least the recent years in the U.S., people seem willing to accept government as unbiased and its claims as infallible. Governments are often guilty of manipulating research (Kempner, 2008). A search of the Computer Retrieval of Information on Scientific Projects database (http://report.nih.gov/crisp/CRISPQuery.aspx) on the National Institutes of Health’s website found that one of the aims of the NCI grant that funded Landman and Glantz’s research (specified in their acknowledgement statement) is to “Continue to describe and assess the tobacco industry’s evolving strategies to influence the conduct, interpretation, and dissemination of science and how the industry has used these strategies to oppose tobacco control policies.” Clearly this grant governs not only the topic but also the conclusions of the research, a priori concluding that the tobacco industry continues to manipulate research, and motivating the researcher to write papers that support this. Surely it is difficult to imagine a clearer conflict of interest than, “I took funding that required me to try to reach a particular conclusion.”

The comment “[t]hese efforts can influence the policymaking process by silencing voices critical of tobacco industry interests and discouraging other scientists from doing research that may expose them to industry attacks” is clearly ironic. It seems to describe exactly what the authors are attempting to do to Glantz’s critics, discredit and silence them, to say nothing of Glantz’s concerted campaign to destroy the career of one researcher whose major study produced a result Glantz did not like (Enstrom, 2007; Phillips, 2008). If Glantz were really interested in improving science and public health, rather than defending what he considers to be his personal turf, he would spend his time explaining why his numbers are better. Instead, he spends his time outlining (and then not even responding to) the history of critiques of his work, offering only his personal opinions about the affiliations of his critics in his defense.

References

1. Landman, A., and Glantz, SA. Tobacco Industry Efforts to Undermine Policy-Relevant Research. American Journal of Public Health. January 2009; 99(1):1-14.

2. Marlow, ML. Honestly, Who Else Would Fund Such Research? Reflections of a Non-Smoking Scholar. Econ Journal Watch. 2008 May; 5(2):240-268.

3. California Environmental Protection Agency. Identification of Environmental Tobacco Smoke as a Toxic Air Contaminant. Executive Summary. June 2005.

4. Kempner, J. The Chilling Effect: How Do Researchers React to Controversy? PLoS Medicine 2008; 5(11):e222.

5. Enstrom, JE. Defending legitimate epidemiologic research: combating Lysenko pseudoscience. Epidemiologic Perspectives & Innovations 2007, 4:11.

6. Phillips, CV. Commentary: Lack of scientific influences on epidemiology. International Journal of Epidemiology. 2008 Feb;37(1):59-64; discussion 65-8.

7. Libin, K. Whither the campus radical? Academic Freedom. National Post. October 1, 2007.

——–

Our conflict of interest statement submitted with this was — as has long been my practice — an actual recounting of our COIs, unlike anything Glantz or anyone in tobacco control would ever write. It read:

The authors have experienced a history of attacks by those, like Glantz, who wish to silence heterodox voices in the area of tobacco research; our attackers have included people inside the academy (particularly the administration of the University of Alberta School of Public Health (National Post, 2007)), though not Glantz or his immediate colleagues as far as we know. The authors are advocates of enlightened policies toward tobacco and nicotine use, and of improving the conduct of epidemiology, which place us in political opposition to Glantz and his colleagues. The authors conduct research on tobacco harm reduction and receive support in the form of a grant to the University of Alberta from U.S. Smokeless Tobacco Company; our research would not be possible if Glantz et al. succeeded in their efforts to intimidate researchers and universities into enforcing their monopoly on funding. Unlike the grant that supported Glantz’s research, our grant places no restrictions on the use of the funds, and certainly does not pre-ordain our conclusions. The grantor is unaware of this letter, and thus had no input or influence on it. Dr. Phillips has consulted for U.S. Smokeless Tobacco Company in the context of product liability litigation and is a member of British American Tobacco’s External Scientific Panel.

A give-and-take on censoring ecig research that gets almost everything wrong

by Carl V Phillips

I have watched with some amusement the swirl of attention around this op-ed (for that is what it is) by Jim McCambridge, in the journal Addiction, calling for further censorship of THR research, and this response to it in a blog post by Neil McKeganey and Christopher Russell. My amusement is first because it seems like this exchange feels like it was written 15 years ago and second because of the huge oversights by all involved. Continue reading