Sunday Science Lesson: So much of what is wrong with public health, in one short rejection letter

by Carl V Phillips

I finally got around to submitting our study about the failure of peer review in public health. (If someone wants to write a guest science lesson post about how to be more efficient about just getting things done instead of letting them languish, it would be most welcome.) We had decided to submit it first to BMC Public Health (BMCPH), the journal whose reviews and publications we studied. You might recall that we discovered that the journal reviews we analyzed were mostly content-free, or close to it, despite the many serious problems we (Igor Burstyn, Brian Carter, and I, with contributions from Clive Bates) identified in the submissions. The journal peer-review process did not manage to fix any of the major problems — not the fatal flaws that should have sent the paper back to the drawing board, nor even the simpler errors that could have been fixed with a rewrite. We decided we owed it to BMCPH to give them the chance to step up and publish the paper, and perhaps then do some soul-searching about it.

BMCPH was not a great fit. Our paper is not research in public health, but about it. We reviewed THR papers, because that is our common area of expertise, but the analysis was not about THR nor public health, but rather about the journal review process in the field. More important, publishing in BMCPH would make the paper fairly obscure, since no one actually reads BMCPH, only individual papers in BMCPH. That is, the journal is a giant unsorted catch-all, which is just fine in the search-engine age, but means that no one interested in our topic, the quality of scientific publishing, would be browsing the titles and stumble upon it, as they might if it were in a more targeted collection of papers.

So we really did not want it to be accepted. However, what I expected would happen was that BMCPH would transfer the submission to the new journal about research integrity that the BioMed Central empire has launched and is trying to populate (BMC has a standard practice of moving submissions between their journals). That would have allowed BMC to step up (plus get a paper for their currently empty new journal), even if BMCPH did not, and also get the paper into a journal that people with an interest in the topic might browse.

Instead, we got an out-and-out editorial rejection. The upside of this (in addition to relieving us of any moral obligation to let BMC publish the paper) is that the content of the rejection was pure gold in terms of further illustrating what is wrong with public health publishing. The (entire) content was:

We have concerns regarding the methodology of the study, in particular the absence of an independent evaluation of the authors’ own peer review reports and an independent comparison between those reports and the original peer reviewer reports of the articles. The conclusions are therefore solely based on the authors’ own subjective opinion.

In addition, the language of the manuscript is unsuitable for a scientific article.

We also note that Carl Phillips’ role as Chief Scientific Officer of the Consumer Advocates for Smoke-free Alternatives Association (CASAA) and Igor Burstyn and Brian Carter’s positions on the Board of Directors of CASAA are not declared in the Competing Interests section.

Let’s start with the last bit, which is interesting at many levels. First, my CASAA CSO title was listed as my job description in the author list, which the reader sees long before getting to the conflicts of interest (COI) statement (if they look at such statements at all, which truth-seeking readers generally do not bother to do). I suppose occasionally someone repeats their affiliation from their title in the COI statement, but you will notice it is pretty rare. For example, none of the authors in the 12 BMCPH articles we reviewed did so. Igor and Brian’s positions on the CASAA BOD are not an interest at all, and thus cannot be a competing interest. Those are uncompensated volunteer positions, as whoever went digging for some excuse to complain about our submission undoubtedly discovered.

What’s more, this is not a paper about THR, any more than it is a paper about punctuation because it uses punctuation, and so even a paid position with a THR organization, or even with a THR product manufacturer, would have very limited importance. The editor might as well have complained that Igor did not disclose that he coaches soccer. I think there is something interesting to be gleaned here: We noted in our paper that several of the papers we analyzed were reviewed by people who were clearly unqualified to comment on the methods. For example, a (truly terrible) bibliographic analysis of journal articles about e-cigarettes was reviewed by medics who have an interest in tobacco products (which was not really needed) but no apparent understanding of bibliographic analysis (which was).

This is a perennial problem in public health, thinking that a paper is defined only by its topic area. I was reminded of this when I registered with BMC to submit the paper (probably for the tenth time; I am always forgetting my login), and the system asked me to mark which entries on their list were my areas of expertise (so they could later bug me to review papers). The list consisted of subject-matter topics, with no option for me to note that I am an expert on epidemiologic study methods, sources of study bias, and various other scientific methods that would make me a particularly good reviewer for a huge portion of the papers they publish, regardless of the specific subject being studied. This is all a result of public health “science” having degenerated into being just about the politics, which aligns with topic area, rather than about science. (Also, the only entry on BMC’s list that related to tobacco products was denoted “tobacco control”. It is a whole different level of problem that an ostensibly scientific publisher cannot distinguish between an area of scientific inquiry and a special-interest political movement.)

Returning to the silly COI complaint, take as given that the editor (unjustifiably) thinks that these omissions should be corrected. This is obviously not a reason for rejecting a submission, but merely for instructing us to include the information, which could easily have been done for a new version before sending it out for review. Now I suppose if someone blatantly omits an enormous bright-line COI, like being commissioned to write a paper with specific conclusions or paid to ghost-author a paper that was actually written by a pharmaceutical company (these are not made-up scenarios, of course), and is caught at it (which is a mostly fictitious prospect), it might be appropriate to punish them with a rejection. But obviously this was nothing remotely like that.

Ironically, it turns out that there were various blatant unreported COIs that we discovered in our study and reported in our paper. These did not seem to bother the BMCPH editors who published the articles. Undoubtedly there were dozens, perhaps hundreds, of unmentioned affiliations among the authors and reviewers we studied that were as much a COI as CASAA BOD membership (or more precisely, as much a COI as such membership would be if the paper had actually been about THR). People who write academic papers inevitably serve on numerous committees related to what they are writing about, within universities, for governments, for professional organizations, and for political advocacy organizations (which also often describes the other entries on this list). These are equivalent to CASAA BOD membership, and no one ever suggests that they be listed in COI statements.

What we are really seeing, of course, is the standard public health practice of never genuinely caring about real COI, but simply pretending to care in order to use the concept as an “-ism” for censoring anything contrary to their political views. Being associated with CASAA is not a COI at all in this case, but even if it were, it is obviously a very minor one, not something that could affect the acceptability of a submission. What the BMCPH editor was effectively saying was “you consort with people who we in public health do not like, and simply because of that association, we will not consider publishing your paper.”

It turns out that this story actually gets even more absurd. BMC was founded by an idealistic bunch whom I got to know when I ran one of their journals. They eventually sold out (in every sense of the term), but BMC was once something special. (This is why you will find quite a few of my papers in their journals; one of them was even the most read paper across all of their journals for most of a year.) When they were founded, BMC created a very enlightened COI rule which emphasized disclosing political and ideological COIs, which have far greater impact than funding for all but the most blatant cases, like the examples I mentioned above. This remains the stated rule for COI disclosure at BMC journals, though it is usually ignored by authors and not enforced by editors, with even the most obvious extreme ideological COIs going unmentioned, as we noted in our study. But here’s the thing: we actually obeyed the rule. Our COI statement included:

The authors are all positively disposed toward THR, and thus our reviews were probably more emphatic in their objections to anti-THR polemic, bias, and disinformation than they were toward the (comparatively rare) pro-THR editorializing, but we did note that also.

This is obviously a much stronger and more useful disclosure than mentioning something like an affiliation with CASAA. The affiliation might allow someone to guess at the ideological views that might have influenced our analysis, but that would not be definitive. We directly disclosed our actual COI, eliminating whatever information value knowing about the CASAA affiliation might have had, because there is no need to guess. We even volunteered an observation about how this might have affected our analysis. People in “public health” who talk about COI, including journal editors, simply have no idea what the concept really means, or pretend not to.

Moving on, the editor’s comments before this (same old) COI silliness are really what is most telling. Taken as a whole, one can interpret them as saying “this does not look like the stuff we usually deal with.” That is certainly true, and would have been a fair reason to ask us if we would consider porting the submission over to the research integrity journal. (One might also interpret it as saying “wow, this paper has an enormous amount of content and — as you discovered in your study — our reviewers and editors seem to put in an average of less than an hour reviewing a paper, so we just cannot deal with this.” But perhaps that would be too cynical.)

The fact that health science publishing does not have room for an analysis like ours is representative of a serious general problem. The churning machine that cranks out research reports and has room for nothing else (other than a few non-scientific papers that are functionally op-eds) explains a large portion of what is wrong with health science fields, and with popular press reporting about the fields, and why trust in the health science academic literature is misplaced. This culture creates a collection of unexamined contradictory monologues, in the form of research reports with tacked-on analysis-free commentary, which are then just cherrypicked or reported in isolation in pursuit of political or corporate goals. There is close to zero actual scientific analysis of what those study results all mean. What passes for attempts to bring together the isolated monologues, like meta-analyses, actually makes the problem worse.

Those of us who have tried to do serious analysis of previous bits of monologue in public health, such as reanalyzing or critiquing previous work, find that there is simply no market for it. Readers of this blog are probably familiar with attempts to respond to anti-THR junk science, and even to legitimate errors, within the academic journals system. You probably know that such efforts are basically pointless, epitomized by spending a lot of effort to get a journal to publish a ridiculously short letter that everyone just ignores anyway. Trying to get a robust critical analysis into a journal is even more difficult.

You may not realize that this is not limited to the hotly politicized topics in health research; it is a problem in the field that exists independent of the worldly politics (though it makes the problems created by the politics far worse). One of my papers that I am most proud of relates to a discovery, by a research team I was part of, that H. pylori (popularly known as the stomach infection that causes ulcers) was spontaneously eliminated (went away without treatment) in many of our study subjects. This was contrary to the then-standard doctrine (which persists in medical circles to this day) that H. pylori never spontaneously eliminates. The journal that published our study invited a commentary from someone from the old guard, who just asserted, without any substantive analysis, that our result must have been due to measurement errors. My paper was a complicated serious analysis that ran the numbers and showed that this explanation was utterly implausible. But getting it published — even though this took place in the pages of one of the few reasonably-scientific epidemiology journals and not a standard public health or medical journal — was like pulling teeth. What’s more, the current Google Scholar count of citations to these publications is 65 for the original field study, 17 for the “oh, that was just measurement error” commentary, and only 9 for my analytic debunking of the commentary (probably all of which are from authors of the original field study). These numbers probably all understate the actual total citations (I know the first does), but the relative numbers probably track the real totals. Actual analytic dialogue is basically ignored in health science.

Returning to the specifics of the rejection letter, the “language of the manuscript is unsuitable for a scientific article” line is always one of my favorites. I see it a lot because I do what people do when they want to make clear what they are claiming: communicate clearly, without euphemism and pointless jargon, spelling everything out in understandable language. Whenever I challenge someone who recites that line, asking him to identify what he objects to, he points out one or two sentences, almost never as many as five, that use language he thinks is too colloquial. Needless to say, it is easy enough for me to just change those few words.

I assume this leaves such commentators unsatisfied (perhaps fuming) but unable to say anything more. Of course, what they were really trying to say is “you wrote this in a style that effectively communicates your substance to anyone who reads it, and you actually lay out and argue your claims; how are we supposed to maintain our reputation as a priestly caste without sciencey language and bald simple assertions?” If you ask any serious scientist to name the best bits of modern science he has ever read, the answer will almost always be works that would provoke one of those “unsuitable language” complaints. (My list includes such works by Schelling, Feynman, and Dawkins.) This may seem like a relatively minor point, but it is actually a perfect symptom of a huge problem: Health science publishing is ultimately about form — you might even call it ritual — rather than scientific thinking and analysis. As we note in the paper, anything that fits the standard format and obeys the artificial style rules, no matter how terrible the analysis and unsubstantiated the conclusions, will be published in a health science journal if the authors want it to be. Anything that will not fit that format will have a very hard time finding its way into those journals, no matter how informative it is.

You might have noticed that, so far, the stated bases for our rejection consist of cosmetic points that would be trivial to fix. So now let’s consider the only substantive comments about the paper, found in the first paragraph.

Those of you familiar with the paper will recall that the three authors each independently wrote peer-review reports for the submission versions of each of the 12 articles in our dataset (with Clive substituting for a couple of them for reasons explained in the paper). Then we summarized the content from these and from the reviews written by the journal’s own reviewers into a bullet list. We then analyzed the differences between our reviews and the journal’s reviews. Is this the best anyone could have done? Obviously not. It is always possible to do more for any study. In a world of unlimited resources (and patience), we would have had three or ten more people playing the same role.

But notice what the editor’s complaint really says: If I had refrained from writing reviews, limiting my role to analyzing the reviews that Igor, Brian, and the journal reviewers wrote, and Igor and Brian contributed nothing to that analysis, then this magical “independent” reviewer would have been me. But under that scenario, we would have had only two new sets of reviews which were each evaluated by only one other researcher. Instead, the way we did it, we had three new sets of reviews, and each of them and the journal reviews were evaluated by three researchers. (This included each of us playing a role in summarizing his own review, which I am guessing the editor objects to. But what could be wrong with that? Who better to write a bulleted summary of a few pages of prose than the author of that prose?) I cannot figure out how anyone could possibly think what we did — generating the maximum possible analysis with the available resources — is inferior to limiting ourselves to a subset of the same analysis.
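To make that arithmetic concrete, here is a minimal sketch (in Python, purely illustrative and not part of the paper) that just counts review sets and evaluator-review pairings under the two designs; the names and counts come straight from the description above, and "coverage" here is only a tally, not a measure of review quality.

```python
# Back-of-envelope comparison of the two study designs discussed above.
# This only counts review sets and evaluator-review pairings; it is an
# illustration of the argument, not a measure of anything deeper.

def coverage(new_reviewers, evaluators):
    """Return (number of new review sets, evaluator-review pairings).

    Each new reviewer contributes one set of reviews of the 12 articles;
    every evaluator examines every new review set plus the journal's
    own reviews (the "+ 1").
    """
    review_sets = len(new_reviewers)
    pairings = len(evaluators) * (review_sets + 1)
    return review_sets, pairings

# The editor's implied design: Igor and Brian write reviews, and only
# Carl serves as the "independent" evaluator.
print(coverage(["Igor", "Brian"], ["Carl"]))  # (2, 3)

# The design actually used: all three authors write reviews, and all
# three take part in evaluating every set, including the journal's.
print(coverage(["Carl", "Igor", "Brian"],
               ["Carl", "Igor", "Brian"]))    # (3, 12)
```

By either tally, the pooled approach produces strictly more review material and strictly more evaluations of each review set than the “independent evaluator” version would have.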

The main takeaway from the criticism is trivial: Presumably the editor did not think hard enough to figure this out before writing the comment. But there are some broader implications also. The editor probably did not think it through because she had some rule-of-thumb notion in mind about qualitative research (“rule. says. must. have. designated. independent. evaluator.”). That faux-rule works great for academics who are trying to give all their friends a chance to do a few hours’ work and get credit as coauthors, but we had to be more efficient. Moreover, it is fairly absurd to imply that we violated some standard practice of research methodology, because our methodology does not have any standard practices. It was a rare (perhaps unique) effort to address something that is almost never studied. It is fine to say “X is a substantive fault which would have been avoided had you done Y differently.” But it is absurd to say, in effect, “you did not follow the rule” when there is no precedent (and indeed, it is a lame complaint even when there is a standard practice that the authors actively chose not to follow, given how bad standard practices often are).

Moreover, in the case of our paper, everything someone would need to identify a substantive fault is present. This contrasts sharply with all of the qualitative research papers that we analyzed in our study, which BMCPH did publish. For those, readers (and thus the journal reviewers) were forced to just take the authors’ word that they had accurately summarized the interviews they were reporting about. They published only extremely simplistic summaries (with some obvious biases) and cherrypicked individual quotes (ditto). In one case we described the use of quotes from the interviews in a paper as reading more like a motivational pamphlet than an analysis of the actual content of the interviews.

Unlike the interview transcripts from the BMCPH-published papers, the reviews from our study are all published with the working paper and were provided to the journal in the submission. If any reader — including, say, the editor of the journal or the reviewers she could have sent the paper to — thinks we did not evaluate the content correctly, this would be easy to substantiate. No speculation or appeal to faux-rules is necessary. On the other hand, under that scenario where I alone did the analysis of the reviews, or if we had a fourth poor soul who was willing to do that work, would that magically mean the summary was better than our collective effort, let alone perfect? Obviously not. It is not as if that individual could be a tabula rasa. He would have to familiarize himself with the content of the journal submissions to make any sense of the reviews, at which point (assuming he was qualified to be doing this work) he would have formed his own opinion about what to say about the submission. That is, he might as well have gone ahead and written his own review for the dataset, putting us back to where we started, with all authors playing all roles. It should come as no surprise that we carefully constructed our methodology based on our assessment of what was the best option for this unusual analysis.

That leads us to the most pernicious and harmful of the many harmful themes in the editor’s review. It bears repeating:

The conclusions are therefore solely based on the authors’ own subjective opinion.

Really? Conclusions based solely on authors’ own subjective opinions? What an awful turn of events! Oh, wait. That does not seem like such a problem given that all conclusions from all scientific analysis are based solely on the authors’ own subjective opinion!! (I actually had to pause while writing this because I was LOLing about the editor’s comment, and not for the first time.)

It so happens that I wrote a bit about this a few posts ago, riffing off of something Igor wrote. I hope it is obvious to my readers that there is basically nothing in science that is not ultimately about researchers’ human (subjective!!!) judgment, from how to design an experiment or a data collection method, to how to analyze the data, to what conclusions to draw from that analysis. I notice that the more that the people in a field try to deny that this is true, trying to pretend (or deluding themselves) that they employ “objective” methods, the poorer excuse for scientific inquiry that field is. At one end of that spectrum, we have serious scientists in serious fields who are always aware that they are not actually being handed data and conclusions from God Himself, and thus engage in serious debate about research methods and results, demand replication, push hard to identify and test possible alternative explanations for their observations, and otherwise try to minimize the (inevitable) errors that come from human imperfection. At the other end, we have health researchers and publishers, like the case in point, as well as the associated “science” reporters and activists/officials, who intentionally try to create the illusion (and delusion) of “objectivity” to cover for the extreme frailty of their entire enterprise.

The irony is once again palpable. We identified blatant assertions of personal political opinions, stated as if they were fact, in most of the articles in our study, and these were published by BMCPH. We identified conclusion statements that in no way followed from, or could follow from, the study results in all of the empirical study reports we analyzed. While all science is subjective, that does not mean you can just say whatever you want and pretend it is science. (Do I recall someone mentioning “language unsuitable for a scientific article”?) The trope that there is anything in science that is not “subjective” is embraced in public health for the very reasons we identify in our study: Public health “science” is frequently produced and published without any scientific scrutiny, and it would not stand up to any such scrutiny, and so the field relies on tricking people into believing in their magical “objective” methods in order to discourage scrutiny. If that were to fail, the whole enterprise would collapse.

In conclusion, I thought that submitting the paper to BMCPH was time wasted on a mere virtue gesture. Little did I know what a great educational exercise it would turn out to be.

 

11 responses to “Sunday Science Lesson: So much of what is wrong with public health, in one short rejection letter”

  1. Scandinavian Journal of Public Health maybe? sjpheditorial@sagepub.com. Ingvar Karlberg of Sahlgrenska University Hospital is E-i-C. They might use profanity in a rejection letter; that could be fun?

  2. What does BMC stand for?

    • BioMed Central. It is just the name of the publisher, or more specifically denotes the series from said publisher that the publisher decided to put its name on, like the journals with “BMJ” or “Nature” in their names. Of course the ultimate owner is the Evil Empire (Springer), which acquired BMC when the idealism wore out.

  3. In the good old days when BMC was first set up, and I was one of the Associate Editors, the aim was to increase the amount of research that got into the public domain. The only reason for an outright rejection was if the study was methodologically flawed. Obviously things have changed, and apparently not for the better.

    • Carl V Phillips

      Yes, there were some good old days, weren’t there? That was the policy of my journal (along with the paper needing to be a fit for the mission, of course), and we worked hard with authors to fix papers rather than reject them.

      And, yes, things have changed. This story provides evidence in both directions: going ahead and publishing studies that are fatally flawed, and rejecting those that are not. Since that last bit is just a single observation in this story, I will add that I have occasionally reviewed for them over the last few years, and invariably the result of my thorough reviews (i.e., I point out what needs to be done to fix the paper) is that the journal just rejects it (even though I generally urge the editor to ask for a revision instead).

      Good science is hard work. Who wants to bother with that?

  4. Carl
    Wendell Berry (I think) coined the phrase “the cutting edge of conformity”, which seems pretty apt:
    “The religion of professionalism is progress, and this means that, in spite of its vocal bias in favor of practicality and realism, professionalism forsakes both past and present in favor of the future, which is never present or practical or real. Professionalism is always offering up the past and the present as sacrifices to the future, in which all our problems will be solved and our tears wiped away – and which, being the future, never arrives.”

    “The anti-smoking campaign, by its insistent reference to the expensiveness to government and society of death by smoking, has raised a question that it has not answered: What is the best and cheapest disease to die from, and how can the best and cheapest disease best be promoted?”

    Which makes me ask: What is the cheapest, most efficient disease, and how do we promote it?

  5. Have you considered submitting to the new BMC Journal of Research Integrity and Peer Review http://researchintegrityjournal.biomedcentral.com/ ?

    Also, I think it obvious that BMC would look for reasons, even cosmetic reasons, not to publish a paper that says to them “your credibility is bust” – so don’t give them any.

    • Carl V Phillips

      We considered it. As noted, I figured they would port it off to there. But right now we are looking at a journal in that space by another publisher who does not have the incentive to come up with an excuse to spike it. The “don’t give them any” can never be accomplished — there is always something about a paper that someone else would have done differently or can complain about if they are intent on denigrating it.

      Indeed, this brings up an interesting contrast. In non-scientific academic fields, there is a struggle to get a paper into a journal. Since that rat race is the only measure of “productivity” there, it is a desperate effort. It generates all those cliches about academic peer-review — reviewers effectively rejecting papers based on “that is not exactly what I would have done” or “this analytic approach is not flawless”. That can always be done if someone wants to. We are not used to it in health sciences, because the problem here is a lack of scrutiny. But if you submit a serious analytic paper or something politically incorrect, you run into it again.

  6. OT: I thought you’d like to know about this. Beta- and gamma-HPVs have been implicated in head and neck carcinomas, by the National Cancer Institute, no less.
    http://www.ncbi.nlm.nih.gov/pubmed/26794505
    And as the proportion of cancers known to be caused by infection rises, it casts further doubt on the blame assigned to tobacco.

    • Yes. Thanks. I actually started a paper on that very topic — on how the claims blaming tobacco products have remained unchanged even as it has become clear that HPV is causing a lot of pharynx and esophageal cancer (perhaps not oral cavity, though). Not sure I will ever finish it, though.

  7. Pingback: What is peer review really? (part 1) | Anti-THR Lies and related topics
