Tag Archives: peer review

Peer review of: Linda Johnson et al. (Washington U med school), E-cigarette Usage Is Associated with Increased Past 12 Month Quit Attempts and Successful Smoking Cessation in Two U.S. Population-based Surveys, NTR 2018.

by Carl V Phillips

For an overview of this collection and an explanation of the format of this post, please see this brief footnote post.

The paper reviewed here is available at Sci-Hub. The paywalled link is here.

This collection will focus mainly on the misleading anti-THR papers produced by tobacco controllers. However, it is useful and important to provide reviews of potentially important papers that might be called pro-THR. This is one example of a paper that has gotten a lot of “ha, take that!”-toned traction.

If a “pro-THR” paper is tight, a review will provide a substantive endorsement, as positive reviews should do (but as the anonymous and secret — and presumptively poor-quality — journal reviews cannot do), as well as a signal boost. If a paper is useful but importantly flawed (as in the present case), the review can correct or identify the errors and focus attention on the defensible bits. And if the paper is fatally flawed, the review should point that out. Bad advice is still bad advice when it feels like it is “on your side”. Even when a paper basically only provides political ammunition and not advice, it is important to assess its accuracy. We are not tobacco controllers, after all, who just make up whatever claims seem to advance their political cause.

—-

Johnson et al. use historical nationally-representative U.S. tobacco use data (NHIS from 2006 to 2016 and CPS over most of that period), for 25- to 44-year-olds, looking at the rate of smoking quit attempts and the association between vaping status and quit attempts or successful smoking abstinence. The authors report an unconditional increase in the population for both quit attempts (measured as the rate of past-year incidence among people who smoke) and medium-term smoking abstinence. They also report a positive association between vaping and smoking quit attempts and abstinence at the individual level. They interpret their results as running contrary to the recent spate of “vapers are less likely to quit” claims, stating “These trends are inconsistent with the hypothesis that e-cigarette use is delaying quit attempts and leading to decreased smoking cessation.”

This is an overstatement, but the results do run contrary to the “vaping is keeping smokers from quitting” trope that the authors position their paper as a response to. This research clearly moves our priors a bit in the direction of “yes, vaping encourages people to quit smoking, and helps them do so.” Our priors only move “a bit” because rational beliefs based on all available evidence tell us we should be very confident of that conclusion already. They should instead have said something like “even if you naively believe in those methods, for this data the result is different”, but such (appropriate) epistemic modesty is absent.

The paper is quite frustrating in that the authors seem not to recognize which of their statistics are actually most informative and persuasive, let alone take the deeper dive into specific implications that could have been done. The natural experiment interpretation of some of the results is more compelling than the behavioral-association-based analysis (see below). The authors overstate the value of their association statistics and effectively endorse the same flawed methods that are the source of the “vapers are less likely to quit” literature.

Footnote: Paper review posts

This is a prepositioned footnote to explain a series of posts I will be publishing.

I expect to soon be launching a major project that will publish a large number of proper peer-reviews of recent journal articles and some other papers in the THR space. (Fair warning to anyone planning to publish junk in the near future!) So, in order to lay in some material for that, develop protocols, learn-by-doing, and such, I am writing some entries for that collection now. Given that I am doing it, I might as well post them here. To find those posts, look in the comments section below for pingbacks.

The publications in this collection will not read like a typical blog essay, though they will be readable and reasonably free-standing, unlike a peer-review for a journal. For those familiar with the latter genre, think of them as a thorough and high-quality journal review — a rarity, I know — with a few hundred words added here and there to make it readable as an essay for someone not intimately familiar with the original paper. (And also with what would have been “the authors should fix this” phrasing changed to be phrased in terms of “the authors made this mistake”, because they also made the mistake of finalizing their paper before seeking the advice they needed to fix it.)

For those not familiar with journal reviews, just know that these pieces will not just address one or a few interesting points, in a narrative style, and not bother with the rest of the paper, as an essay would. They will have those interesting bits, but they will also step through a protocol for addressing each aspect of the paper (e.g., is the literature review in the Introduction legitimate, are the Methods adequately presented, etc.). Some of the bits will probably require reading the original paper to make sense of. For the reviews that I write, I will try to put any interesting narrative bits first, and make those free-standing. This will offer something to casual readers; if you are not interested in the full review, you can stop reading when you get to the disjoint bits about other aspects of the paper.

That is basically what you need to know to make sense of what you are reading. Once I have the guidelines more developed, I will post a link here if you want to delve deeper. In particular, I will be recruiting freelance contributors to write reviews, so if you are qualified and interested, please take note.

Public health publishing is fundamentally unserious: evidence from a single measure of area

by Carl V Phillips

Sometimes an error matters because of its effects. Sometimes it matters because of what it says about its causes.

I was late to this nice piece by Roberto Sussman (a guest post at Brad Rodu’s blog) that takes down a recent silly paper out of the University of California about environmental deposition on surfaces resulting from vaping exhalate. They do not actually call it “third-hand vapor”, though they all but do so, explicitly likening it to the myths (which they endorse, of course) about “third-hand smoke”. For the analysis of the science, please read Roberto’s piece, because here I am just focusing on a single gaffe and its implications.

As background, note that this came from the supposedly respectable tobacco controllers at UC, including Benowitz and Talbot, not the utter loons in Glantz’s shop. It was published not in some random online journal, but in the supposedly respectable flagship journal of the tobacco control movement, BMJ’s Tobacco Control.

Reading Sussman’s piece, I came across this, which he quoted from the original paper:

“After 35 days in the field site, a cotton towel collected 4.571 micrograms of nicotine. If a toddler mouthed on 0.3 m2 [square meters] or about 1 squared feet of cotton fabric from suite #1, they [sic] would be exposed to 81.26 μg [micrograms] of nicotine.”

Sussman’s post is analytic, but it was written as an essay and so I was reading it fairly casually. That is, I was not trying to actively check each bit of the math as I read it, as I would when reading a research report. But even a quick glance across that passage was enough for me to trip up and notice the error. A square meter is about ten square feet, and thus 0.3 m^2  is about 3 square feet. Sussman, who was reading the original paper carefully for purposes of criticizing it, of course also caught this error and noted it in his next paragraph.
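For anyone who wants to see the arithmetic rather than take my word for it, here is a quick back-of-the-envelope check (my own sketch, not anything from the paper or from Sussman’s post): areas scale by the square of the length conversion factor, which is why “a meter is about three feet” does not carry over to square units.

```python
# Length conversion: 1 meter = 3.28084 feet (exact to the digits shown).
M_TO_FT = 3.28084

def m2_to_ft2(area_m2):
    """Convert an area in square meters to square feet.

    Areas scale by the *square* of the length factor, so
    1 m^2 = 3.28084^2 ft^2, roughly 10.76 ft^2 -- not ~3 ft^2.
    """
    return area_m2 * M_TO_FT ** 2

print(round(m2_to_ft2(1.0), 2))  # → 10.76  (about ten, as stated above)
print(round(m2_to_ft2(0.3), 2))  # → 3.23   (about 3 ft^2, not the paper's "about 1")
```

So the paper’s “0.3 m2 or about 1 squared feet” is off by roughly a factor of three, exactly the tripling discussed below.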

In theory this affects the thesis of the paper, which is based on the premise of a toddler sucking out all the nicotine that has accumulated in a towel that has sat untouched in a vape shop for a month. (Yes, believe it or not, that is really the premise of the analysis.) So the error means that the magical vacuuming toddler is given credit for extracting 3 ft^2 worth of accumulation by sucking the heck out of a mere 1 ft^2 of fabric.

However, this is not one of those convenient errors that creates artifactual results that matter. First, every bit of this scenario is obvious nonsense, as Sussman explains, and every step grossly exaggerates the real-world exposures. And, second, even with all that, the tripled quantity is still trivial. So it is not like the common type of “error” from tobacco control research, one done intentionally to get the result the authors want. It merely changes the result from “a silly premise that despite its huge overstatement still only yields a trivial exposure” to “a silly premise that yields an exposure that is three times as high, but is still trivial.” It is obvious that the conclusions of the paper (“environmental hazard” — i.e., landlords should be pressured to not host vape shops) were in no way influenced by the results.

In addition, it is a pretty stupid intentional “error” to make. It is a bright-line error, which appears right there in the text, as if someone had written 2+2=5. The typical tobacco controller “errors” consist of such tricks as conveniently not mentioning that a crucial variable makes the entire result go away (which only very careful readers catch), or fishing for a model that produces the most politically favorable result and pretending it was the only version of the model ever run (which is easy to detect, but impossible to prove).

No, it is clear that this was a mere goof. Someone who is not so good with numbers was thinking “a meter is about three feet, so it must be that a m^2 is about three ft^2”. Oops.

But here’s the thing: Whoever was doing the calculations for the paper made that goof, but more significantly did not catch it on further passes through the material. In other words, no one ever thought carefully about the calculations. Then someone transferred the calculation notes into the text of the paper without noticing the error at that point. The other authors of the paper (there were four total) reviewed the calculation and the paper without ever engaging their brains enough to notice the error, and let it go out the door. Or perhaps they never even reviewed the calculations they were signing off on, and perhaps not even the paper.

Keep in mind that perhaps you, dear reader, might not notice this error on a quick read. Perhaps you did not even know that a m^2 is about 10 ft^2. But anyone who does science, and is burdened with the hassle of dealing with stupid non-SI American units of measure, knows stuff like this intuitively. As I said, I noticed it without even thinking about it, just like you would notice a misspelling even though you are not actively looking for mispelings as you read. Sussman noticed it, and he is a scientist who probably never sees mention of non-SI units in his work, and who lives in a normal country that uses SI units (i.e., “the metric system”) in everyday communication. It is apparent that none of the authors of the paper ever read it as carefully as he did.

The American authors, who need to be literate in translating from American units to scientific units, should have noticed it. It is a safe bet that if prompted, “there is an error in that sentence,” they would figure it out in a few seconds. So the point here is not that they do not know the units or how to do arithmetic, but that they did not pay enough attention to their own calculations to notice the simple error. They never really cared about the calculations, as evidenced by the conclusions that are not actually supported by the results.

They were not the only ones. The reviewers and editor(s) at BMJ Tobacco Control also did not read the paper carefully enough to catch the error. As I have noted at length on this page, journal peer-review in public health is approximately useless. A generalist copy editor would probably have caught it, but presumably BMJ TC does not employ one despite being hugely profitable.

This also means that no one other than the aforementioned seven or eight individuals read the paper carefully. Indeed, it is quite possible that no one else read the paper at all before it appeared in the journal. From the perspective of serious science, this is actually the biggest problem in public health research evident here: not circulating a paper for comments before etching it in stone, but rather creating a “peer-reviewed journal article” out of what is effectively a superficially polished first draft of a scientific analysis. Anyone who actually wants to get something right makes sure a lot of people read it critically before they commit to it.

Many errors in public health articles are a bit complicated, and pretty clearly happen because the authors and reviewers do not know enough science to know they were errors. Many others are pretty clearly intentional on the part of the authors, and signed off on by reviewers because they are incompetent, inattentive, and/or complicit in wanting to disseminate the disinformation. But a stupid error like this illustrates something different: Public health authors and journals are simply not even trying to do legitimate analysis.

What is peer review really? (part 9 — it is really a crapshoot)

by Carl V Phillips

I haven’t done a Sunday Science Lesson in a while, and have not added to this series about peer review for more than two years, so here goes. (What, you thought that just because I halted two years ago I was done? Nah — I consider everything I have worked on since graduate school to be still a work in progress. Well, except for my stuff about what is and is not possible with private health insurance markets; reality and the surrounding scholarship have pretty much left that as dust. But everything else is disturbingly unresolved.)

Feynman vs. Public Health (Rodu vs. Glantz)

by Carl V Phillips

I started rereading Richard Feynman’s corpus on how to think about and do science. Actually I started by listening to an audiobook of one of his collected works because I had to clear my palate, as it were, after listening to a lecture series from one of those famous self-styled “skeptic” “debunkers”. I tried to force myself to finish it, but could not. For the most part, those pop science “explainer” guys merely replace some of the errors they are criticizing with other errors, and actually repeat many of the exact same errors. The only reason they make a better case than those they choose to criticize is that the latter are so absurd (at least in the strawman versions the “skeptics” concoct) that it is hard to fail.

Feynman made every legitimate point these people make, with far more precision and depth.

An old letter to the editor about Glantz’s ad hominems

by Carl V Phillips

I am going through some of my old files of unpublished (or, more often, only obscurely published) material, and thought I would post some of it. While I suspect you will find this a poor substitute for my usual posts, I hope there is some interest (and implicit lessons for those who think any of this is new), and posting a few of these will keep this blog going for a few weeks.

This one, from 2009, was written as a letter to the editor (rejected by the journal — surprise!) by my team at the University of Alberta School of Public Health. It was about this rant, “Tobacco Industry Efforts to Undermine Policy-Relevant Research” by Stanton Glantz and one of his deluded minions, Anne Landman, published in the American Journal of Public Health (non-paywalled version if for some unfathomable reason you actually want to read it). The authorship of our letter was Catherine M Nissen, Karyn K Heavner, me, and Lisa Cockburn. 

The letter read:

——–

Landman and Glantz’s paper in the January 2009 issue of AJPH is a litany of ad hominem attacks on those who have been critical of Glantz’s work, with no actual defense of that work. This paper seems to be based on the assumption that a researcher’s criticism should be dismissed if it is possible to identify funding that might have motivated the criticism. However, for this to be true it must be that: (1) there is such funding, (2) there is reason to believe the funding motivated the criticism, and (3) the criticism does not stand on its own merit. The authors devote a full 10 pages to (1), but largely ignore the key logical connection, (2). This is critical because if we step back and look at the motives of funders (rather than just using funding as an excuse for ignoring our opponents), we see that researchers tend to get funding from parties that are interested in their research, even if the researcher did not seek funding from that party (Marlow, 2008).

Most important, the authors completely ignore (3). Biased motives (whether related to funding or not) can certainly make us nervous that authors have cited references selectively, or in an epidemiology study have chopped away years of data to exaggerate an estimated association, or have otherwise hidden something. [Note: In case it is not obvious, these are subtle references to Glantz’s own methods.] But a transparent valid critique is obviously not impeached by claims of bias. The article’s only defense against the allegation that Glantz’s reporting “was uncritical, unsupportable and unbalanced” is to point to supposed “conflicts of interest” of the critics. If Glantz had an argument for why his estimates are superior to the many competing estimates or why the critiques were wrong, this would seem a convenient forum for this defense, but no such argument appears. Rather, throughout this paper it seems the reader is expected to assume that Glantz’s research is infallible, and that any critiques are unfounded. This is never the case with any research conducted, and surely the authors must be aware that any published work is open to criticism.

Indeed, presumably there are those who disagree with Glantz’s estimates who conform to his personal opinions about who a researcher should be taking funding from, and yet we see no response to them. For example, even official statistics that accept the orthodoxy about second hand smoke include a wide range of estimates (e.g., the California Environmental Protection Agency (2005) estimated it causes 22,700-69,600 cardiac deaths per year), and much of the range implies Glantz’s estimates are wrong. But in a classic example of “a-cell epidemiology” [Note: This is a metaphoric reference to the 2×2 table of exposure status vs. disease status; the cell counting individuals with the exposure and the disease is usually labeled “a”.], Glantz has collected exposed cases to report, but tells us nothing of his critics who are not conveniently vulnerable to ad hominem attacks.

It is quite remarkable that given world history, and not least the recent years in the U.S., people seem willing to accept government as unbiased and its claims as infallible. Governments are often guilty of manipulating research (Kempner, 2008). A search of the Computer Retrieval of Information on Scientific Projects database (http://report.nih.gov/crisp/CRISPQuery.aspx) on the National Institutes of Health’s website found that one of the aims of the NCI grant that funded Landman and Glantz’s research (specified in their acknowledgement statement) is to “Continue to describe and assess the tobacco industry’s evolving strategies to influence the conduct, interpretation, and dissemination of science and how the industry has used these strategies to oppose tobacco control policies.” Clearly this grant governs not only the topic but also the conclusions of the research, a priori concluding that the tobacco industry continues to manipulate research, and motivating the researcher to write papers that support this. Surely it is difficult to imagine a clearer conflict of interest than, “I took funding that required me to try to reach a particular conclusion.”

The comment “[t]hese efforts can influence the policymaking process by silencing voices critical of tobacco industry interests and discouraging other scientists from doing research that may expose them to industry attacks” is clearly ironic. It seems to describe exactly what the authors are attempting to do to Glantz’s critics, discredit and silence them, to say nothing of Glantz’s concerted campaign to destroy the career of one researcher whose major study produced a result Glantz did not like (Enstrom, 2007; Phillips, 2008). If Glantz were really interested in improving science and public health, rather than defending what he considers to be his personal turf, he would spend his time explaining why his numbers are better. Instead, he spends his time outlining (and then not even responding to) the history of critiques of his work, offering only his personal opinions about the affiliations of his critics in his defense.

References

1. Landman A, Glantz SA. Tobacco Industry Efforts to Undermine Policy-Relevant Research. American Journal of Public Health. 2009 Jan;99(1):1-14.

2. Marlow ML. Honestly, Who Else Would Fund Such Research? Reflections of a Non-Smoking Scholar. Econ Journal Watch. 2008 May;5(2):240-268.

3. California Environmental Protection Agency. Identification of Environmental Tobacco Smoke as a Toxic Air Contaminant. Executive Summary. June 2005.

4. Kempner J. The Chilling Effect: How Do Researchers React to Controversy? PLoS Medicine. 2008;5(11):e222.

5. Enstrom JE. Defending legitimate epidemiologic research: combating Lysenko pseudoscience. Epidemiologic Perspectives & Innovations. 2007;4:11.

6. Phillips CV. Commentary: Lack of scientific influences on epidemiology. International Journal of Epidemiology. 2008 Feb;37(1):59-64; discussion 65-68.

7. Libin K. Whither the campus radical? Academic Freedom. National Post. October 1, 2007.

——–

Our conflict of interest statement submitted with this was — as has long been my practice — an actual recounting of our COIs, unlike anything Glantz or anyone in tobacco control would ever write. It read:

The authors have experienced a history of attacks by those, like Glantz, who wish to silence heterodox voices in the area of tobacco research; our attackers have included people inside the academy (particularly the administration of the University of Alberta School of Public Health (National Post, 2007)), though not Glantz or his immediate colleagues as far as we know. The authors are advocates of enlightened policies toward tobacco and nicotine use, and of improving the conduct of epidemiology, which place us in political opposition to Glantz and his colleagues. The authors conduct research on tobacco harm reduction and receive support in the form of a grant to the University of Alberta from U.S. Smokeless Tobacco Company; our research would not be possible if Glantz et al. succeeded in their efforts to intimidate researchers and universities into enforcing their monopoly on funding. Unlike the grant that supported Glantz’s research, our grant places no restrictions on the use of the funds, and certainly does not pre-ordain our conclusions. The grantor is unaware of this letter, and thus had no input or influence on it. Dr. Phillips has consulted for U.S. Smokeless Tobacco Company in the context of product liability litigation and is a member of British American Tobacco’s External Scientific Panel.

A give-and-take on censoring ecig research that gets almost everything wrong

by Carl V Phillips

I have watched with some amusement the swirl of attention around this op-ed (for that is what it is) by Jim McCambridge, in the journal Addiction, calling for further censorship of THR research, and this response to it in a blog post by Neil McKeganey and Christopher Russell. My amusement is first because this exchange feels like it was written 15 years ago and second because of the huge oversights by all involved.

SRNT believes research should be replicated (when they don’t like the results)

by Carl V Phillips

My attention was called to this gem of an editorial, “Conflicts of Interest and Solicited Replication Attempts” by the Nicotine and Tobacco Research (NTR) Editor-in-Chief, Marcus Munafò. NTR is the journal of the Society for Research on Nicotine and Tobacco (SRNT) and the slightly more honest and scientifically sound of the anti-tobacco journals. This editorial offers a new and different reflection on just how out of touch with real science tobacco controllers are.