Post publication peer-review: Correction to Burstyn (2014) and related matters

by Carl V Phillips and Igor Burstyn

[Igor Burstyn is an Associate Professor at Drexel University School of Public Health and a member of the CASAA Board of Directors. The research described here was sponsored by CASAA.]

Burstyn (2014), “Peering through the mist: systematic review of what the chemistry of contaminants in electronic cigarettes tells us about health risks”, BMC Public Health, is probably the most read scientific paper on e-cigarettes and among the most read in the history of tobacco harm reduction. It is often described as the most important paper on e-cigarettes, being the first to point out that there is ample existing evidence that the non-novel chemical exposures from vaping – which are used to concoct alarmist propaganda – are inconsequential. So imagine our surprise when, after well over 100,000 people had viewed the paper at the journal’s website and countless more via other means (the announcement of the publication of the working paper version remains the most read post on this blog), after it went through journal peer-review, and after each of us pored over many revisions, one astute reader caught a bright-line error in the results. It is recounted in the following text by IB:

I am thankful to Dr. Zvi Herzig for noticing an error in units in one result reported in the paper.  The sentence “Assuming extreme consumption of the liquid per day via vaping (5 to 25 ml/day and 50-95% propylene glycol in the liquid), levels of propylene glycol in inhaled air can reach 1–6 mg/m3” should read “… levels of propylene glycol in inhaled air can reach 1–6 g/m3”.  This strengthens the stated conclusion that “… estimated levels of exposure to propylene glycol … warrant concern.”  The corrected calculation was one of several that were used to draw this conclusion.  It was a worse-than-worst-realistic-case scenario and would have to be reconciled with measurements of emissions, and thus should not be considered a realistic quantification unless further measurements change our assessment of what constitutes a realistic scenario.  The corrected estimate suggests greater caution is warranted than the original estimate, but is still not cause for alarm.  It implies that we should be doing more active research to understand the effects of inhalation exposure to propylene glycol at levels higher than those that have been studied in the past, if the predicted exposures are indeed verified by measurements.

I sincerely apologize to my readers for the error and am thankful for such attentive readership.

To clarify that, this was a back-of-the-envelope worst-case scenario estimate of PG exposure. The original erroneous figure led IB to conclude that the PG exposure level was high enough – as compared to reference levels that are known, based on existing evidence, to pose no concern – that caution and research about this level of exposure are warranted. The correction means that in this worst-case scenario, the PG exposure is actually very high compared to the reference levels. This does not mean that it is hazardous – no exposure level for PG is known to be hazardous. But it does put it beyond the range of what we have much evidence about, and so our confidence in it being inconsequential should be substantially reduced pending further research. Note that this refers to the exposure of the vaper herself; the exposure of bystanders is still a couple of orders of magnitude below the level that would cause any concern.
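For readers who want to see where a number like that comes from, here is a rough sketch of the back-of-the-envelope arithmetic (in Python). The consumption range (5–25 ml/day) and PG fraction (50–95%) are from the paper; the PG density and the assumed breathing volume are illustrative choices for this sketch, not necessarily the exact inputs used in the paper, so the output only roughly reproduces the corrected 1–6 g/m3 range.

```python
# Rough worst-case estimate of propylene glycol (PG) concentration in the air a vaper inhales.
# Consumption and PG-fraction ranges are from the paper; the density and breathing volume
# below are illustrative assumptions for this sketch only.

PG_DENSITY_G_PER_ML = 1.04    # approximate density of propylene glycol
BREATHING_VOLUME_M3 = 4.0     # assumed volume of air inhaled over the exposure period

def pg_concentration_g_per_m3(liquid_ml_per_day, pg_fraction):
    """Worst case: assume all PG in the consumed liquid ends up in the inhaled air."""
    pg_mass_g = liquid_ml_per_day * pg_fraction * PG_DENSITY_G_PER_ML
    return pg_mass_g / BREATHING_VOLUME_M3

low = pg_concentration_g_per_m3(5, 0.50)    # 5 ml/day of 50% PG liquid
high = pg_concentration_g_per_m3(25, 0.95)  # 25 ml/day of 95% PG liquid
print(f"Worst-case PG concentration: {low:.1f} to {high:.1f} g/m3")
# Roughly 0.7 to 6 g/m3 under these assumptions: on the order of the corrected
# 1-6 g/m3 figure, i.e. about a thousand times the erroneous 1-6 mg/m3.
```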

Just this much of the story is a very educational illustration of the scientific and publishing process. Papers contain errors. It happens even when the authors are skilled and honest (and obviously much more often when they are not). Serious authors circulate versions of their paper for comments and recruit at least a few serious readers to closely examine a paper before the journal version is finalized, as occurred in this case. The journal peer-review process adds further scrutiny, though despite the fetishization of it, it is considerably less important than the prior step. But even then, errors remain and some among the many readers – and there are almost always far more readers than there were during prior stages in the paper’s life – notice them. This is often called “post-publication peer-review”, though that term makes rather too big a deal about the moment the paper appears in a journal, which is really just a step in the middle of a long path of the scientific community assessing the validity of research.

To make post-publication peer-review work, there needs to be a way to prominently make corrections. Immediately upon being notified of the error, IB composed the above correction and wrote to the journal asking that it be attached to the paper as an erratum. The journal refused to do this because “we are not able to publish an erratum article in cases where the conclusions of the paper are not altered.”

This is a truly remarkable statement and illustrates much of what is wrong with public health science, even setting aside the fact that moving from expressing a bit of concern to having substantially greater reason for concern is a change in conclusions. The specific results of research matter. A major change in a result that is central to the aim of the study (and this was such a result) matters. Later readers who build upon the existing research might rely on that particular result; it is not the role of researchers or journals to decide or even predict how results will be used. Serious readers of research do not much care about what an author stated as his personal assessment of the exact implications of the research. Indeed, unless the goal is to assess the accuracy of authors’ claims, we often do not even read the conclusions (or introductions) of papers we read. But the journal basically declared that only the author’s stated conclusions matter. They are probably right in judging that most readers in public health make the serious mistake of focusing only on such conclusions, but a journal should be part of the serious scientific process rather than catering to that regrettable practice.

Fortunately, BMC journals have a comment page attached to each article they publish. Upon having the request for an erratum refused, IB submitted the above text to the comment page.

After most of a month it has still not appeared and there has been no explanation. At the time of this writing, there is no evidence the journal is interested in correcting an error in the calculations, the essential part of this paper that will transcend the author’s polemics and interpretation. This suggests that the science that is at the heart of the matter is deemed by the journal (and perhaps much of the field of public health) to be less important than peering into the opinions of the pundits. We would be happy to be persuaded that this impression is wrong.

Unfortunately, even assuming this inexplicable delay in posting a comment (which in our experience usually takes only a few days) is eventually resolved, the comment – like letters to the editor in journals that adhere to that antiquated system – will probably have little impact. This further reflects the unfortunate behavior of consumers of public health literature (and further emphasizes the harm caused by erroneous journal articles): few of those reading the paper go so far as to click on the comments link in the sidebar beside the article, let alone seek out letters to the editor, blog posts, and other useful post-publication review material. That includes not just casual consumers, for whom this is understandable, but ostensibly serious consumers like other researchers or regulators.

A similar example of this is Phillips (2009), “Debunking the claim that abstinence is usually healthier for smokers than switching to a low-risk alternative, and other observations about anti-tobacco-harm-reduction arguments”, Harm Reduction Journal (another BMC journal). That paper is far less read, though its implications are arguably more important for THR than Burstyn’s (we will spare our readers a debate about that point). In common with Burstyn, it focuses on an observation with huge practical implications that was obvious to a few observers but remarkably overlooked. In this case it was addressing the unchallenged but, it turns out, false claim that quitting tobacco products entirely is a healthier option than THR.

That is only true if quitting entirely would have occurred at the same time as adopting THR, because any risk from smoke-free alternatives is so low that delaying smoking cessation by only a matter of weeks causes more risk than a lifetime of using the smoke-free alternative. At the heart of the paper was a back-of-the-envelope calculation to estimate roughly how long a delay in quitting would be the break-even point for a random smoker, assuming the risk of the alternative product was 1% that of smoking and several other assumptions. The version that was published put the point estimate for this very rough calculation at 1 month. But it turns out that, due to a calculation goof, this was wrong and the point estimate should be 2 months. This was caught not by the readers of draft versions or the journal reviewers, but by an interested expert reader quite a while after the journal version was published. Details can be found in the comment CVP posted as a correction.
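For illustration, here is a deliberately simplified version of that arithmetic (in Python). The 1% relative-risk assumption is from the paper; treating risk as accruing evenly over an assumed 20 remaining years of product use is our simplification for this sketch, not the model actually used in the paper, but it lands in the same ballpark as the corrected point estimate.

```python
# Simplified break-even calculation: how long a delay in quitting smoking carries about the
# same risk as a lifetime of using a low-risk alternative? The 1% relative risk is the
# paper's assumption; the linear risk accrual and 20 remaining years of use are illustrative.

RELATIVE_RISK_OF_ALTERNATIVE = 0.01  # alternative assumed to carry 1% of smoking's risk
REMAINING_YEARS_OF_USE = 20          # assumed remaining years of product use

# Under linear accrual, a lifetime of the alternative equals 1% of the remaining months of smoking.
break_even_months = RELATIVE_RISK_OF_ALTERNATIVE * REMAINING_YEARS_OF_USE * 12
print(f"Break-even delay in quitting: about {break_even_months:.1f} months")
# About 2.4 months under these assumptions, in line with the corrected 2-month point estimate.
```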

The problem is that no one seems to read that comment. Whenever the paper is rediscovered, the 1 month claim is what gets broadcast. Article versions of papers are simply given too much deference, our own included. The scientific record is full of additional information.

In a practical sense, the 1 vs. 2 months issue does not matter because the point estimate is an inherently imprecise estimate based on critical assumptions and a rough method of calculation. It is the qualitative message — that any very short delay in smoking cessation is more harmful than any risk from alternative products — that is important. Thus it is really the case that striving for complete tobacco cessation is not a safe alternative to THR! Similarly, the exact magnitude of the corrected figure from the Burstyn paper is not what is important; in both cases, they are sufficient to make the point even if they are wrong by a factor of five.

A final notable observation is that neither of these patent errors was identified by the ANTZ who seek to discredit these results, but rather were identified by readers who appreciated the results. Tobacco controllers simply do not have the scientific skills to notice real errors in science, as evidenced by just how incredibly bad their own “research” is. What passes for critique of science in their world consists of nothing but ad hominem attacks and absurdly sputtering, in effect, “but even with this, we still don’t know everything, so our personal political views must still be right.” They are very at home in a field where people care more about authors’ asserted conclusions rather than what the results of a study actually showed (which those conclusions are often unrelated to). Moreover, they are quite at home in a field where correction of errors in journal articles is left entirely to proactive efforts by the authors, who need to be skilled enough to recognize the error and honest enough to want to correct it. Even though ANTZ papers are rife with errors, when was the last time you saw one of them volunteer a correction, let alone make an effort to publicize it?

[Update 12 Jun 15: Hours after this post was published, the comment finally appeared attached to the Burstyn paper. Apparently someone reads this blog :-).]

6 responses to “Post publication peer-review: Correction to Burstyn (2014) and related matters”

  1. Pingback: Breaking News: New study shows no risk from e-cigarette contaminants | Anti-THR Lies and related topics

  2. Pingback: Post publication peer-review: Correction to Bur...

  3. Thanks, Igor and Carl,

    I would add that:

    – there’s no chance of systemic toxicity. According to the European Medicines Agency, intravenous PG is considered “safe whatever the duration” up to 500 mg/kg body weight (http://www.ema.europa.eu/docs/en_GB/document_library/Report/2014/12/WC500177937.pdf).

    – even with regard to pulmonary effects, animal research on PG exposure didn’t find any problems, which suggests that there shouldn’t be serious problems at higher levels.

    – an animal experiment verifying the safety of PG exposure relevant to usage levels of an Altria cigalike was presented publicly at an FDA hearing.

    – evidence exists for the relative pulmonary safety of vaping (http://www.biomedcentral.com/1741-7015/13/54, http://www.univadis.com/conference-reports/10/SRNT-E-cigarettes-may-result-in-an-improvement-in-lung-function#?, http://www.atsjournals.org/doi/abs/10.1164/ajrccm-conference.2015.191.1_MeetingAbstracts.A4686).

    – the survey of 19,000 e-cigarette forum members by Farsalinos found that mean daily consumption is 3 ml – much lower than the 25 ml worst-case scenario used by Burstyn.

    – the 4 m^3 breathing volume for an 8-hour work shift is very conservative; see http://www.ecigarette-research.com/web/index.php/research/2015/190-niosh-da

    – two studies are widely cited incorrectly as suggesting PG may represent an inhalation issue, namely:

    1) Choi et al is cited incorrectly as stating that chronic exposure to PG in indoor air may exacerbate or induce rhinitis, asthma, eczema and allergic symptoms in children.

    That study found an association between propylene glycol + glycol ethers and asthma in children, but reported:
    “…apparently elevated likelihood of the present outcomes was not driven by propylene glycol… the estimated risks after excluding the propylene glycol were almost identical as those based on the original definition of the PGEs.”
    http://www.plosone.org/article/fetchObject.action?uri=info%3Adoi%2F10.1371%2Fjournal.pone.0013423&representation=PDF

    2) Varughese et al, “Effects of Theatrical Smokes and Fogs on Respiratory Health in the Entertainment Industry,” is cited incorrectly as showing that chronic exposure to PG fog causes decreased lung function. However Varughese et al explicitly write, “it was not possible to distinguish the role of glycol or mineral oil fogs.”
    Furthermore, Moline et al in “Health Effects Evaluation of Theatrical Smoke, Haze, and Pyrotechnics” indeed distinguished between glycol and mineral oil. They found that mineral oil, but not glycols, reduced lung function.

  4. 30mls (at 24mg strength) lasts me about 2 weeks and I thought I was a fairly committed user; more than 10mls a day sounds like a lot.

  5. Pingback: New Phillips-Burstyn-Carter working paper on the failure of peer review in public health | Anti-THR Lies and related topics

  6. Pingback: What is peer review really? (part 1) | Anti-THR Lies and related topics
