by Carl V Phillips
A quick finish to the theme I was working on in the last post in the series, before moving on to a definitive example of the failure of journal peer review in public health. Recall from that post Myth 2: Health science reviewers have the skills and incentive to do the job they are assumed to be doing. I expanded a lot on Submyth 2a, about how they are often simply not qualified. Here I look at the other half of this:
Submyth 2b: Journal reviewers have an incentive to do a good job.
What are the rewards for putting in the time and effort to do a good review for a journal? The answer is basically nothing. (I realize — obviously, consider how I have spent my life — that people do a lot of things for good reasons other than to get some kind of personal compensation. But that does not solve the problem. Read on.)
In theory, public health academics, like other academics, are expected to do reviews as part of their job description. Some even mention the journals they have reviewed for in their annual reports, thinking it matters (and maybe it does, some tiny bit). A few are desperate or naive enough to even put it on their CVs. Some non-academics benefit from their employers seeing them as legitimate academic-type researchers, and so are in a similar position. I am not sure anyone doing an evaluation really cares much. But the key observation is that even if they do care a lot, this is merely about doing a review; it counts the same whether the review is a single vacuous sentence or a brilliant analysis that is better than the original submission itself.
Any desire to contribute to the greater good by doing a good review diminished substantially when public health academia became an absurd rat race of chasing grants (an environment that, notably, also offers no rewards for good teaching). It has been that way for a while. I remember when I was at the University of Texas School of Public Health, we had faculty meetings in which the Dean of Research, supposedly briefing the faculty and the administration about research, talked about nothing other than what grants had been landed or were being pursued. One time I raised my hand and pointed out that research (i.e., actually adding knowledge to the world) had not actually been mentioned at all in the long section of the meeting labeled “research”. I got lots of noises of agreement from throughout the auditorium (it was a very large faculty), but nothing ever changed. Those in power only cared about money, which is pretty pathetic given what field they went into. When you are desperate to maximize your “score” in that environment, you barely care about whether your papers are any good, let alone your reviews.
Another slightly less base, but still personally beneficial, reward from doing good work is to impress those who see it. That is why many of the best journal reviews are written by graduate students — they want to impress their advisor who assigned them to do the review. (Also, they tend to be more cutting-edge in their knowledge and have not yet experienced the cynical deadening that comes from being a professor. Stay in school, kids. Seriously, never graduate if at all possible.) This also works when a respected colleague or friend specifically asks you to review a paper he is the editor in charge of, or when you are on the editorial board of a journal.
What all of these have in common is that the reviewer cares what someone else within the production process thinks. Because the chance of impressing anyone else with your good work via a review that is anonymous and confidential is about nil. And even in cases where the reviews are not anonymous and confidential, as with the BioMed Central journals, hardly anyone looks at them. There is the obvious exception of cases when someone like me reviews the reviews, as I did earlier in this series, which might create an incentive not to screw it up too badly. On the other hand, do we really think either of the CDC employees who did those reviews I critiqued — either the one who just wrote two vacuous sentences or the one whose suggestions actually made the paper worse — is going to suffer any consequences for their poor work?
That brings up the next problem with incentives to do reviews well and honestly: When a reviewer really only cares about the political impacts the results of a paper might have, she has the incentive to “review” it based only on those. This is somewhat of a problem in any field (if you include “advancing the reviewer’s preferred theories” among worldly political implications), but it is a complete disaster in public health, and particularly in areas like tobacco. Well over half the journal reviews I have received on papers on tobacco harm reduction were basically just political opinions about the conclusions or implications (usually, but not always, anti). For my non-tobacco public health papers, that drops below half (but not by a whole lot).
But let’s take a best-case scenario and imagine that a reviewer has the scientific skills that most people in public health lack (as discussed in the previous post), has an ethical compass such that he will not let political preferences override scientific analysis, and really wants to contribute good reviews to the world in spite of the meager rewards that come from doing so. For concreteness, let’s call him Carl. So does Carl relish the opportunity to do journal reviews? Absolutely not. Because he knows what happens over 90% of the time when someone writes a good and thorough review in public health. It is fairly likely that the editor just proceeds to accept the paper, and the authors either ignore any major comment or pretend to respond to it but really do not (a classic and easy tactic — editors pretty much always let authors get away with this). More likely is that the editor just rejects the paper outright (“hmm, lots of recommended changes here; must be a bad paper; no point in bothering with the details”) and the author ignores the reviewer comments, no matter how useful they might be, and sends it to another journal unchanged. So why should Carl bother?
To take one example, there is a paper in a journal that was a zillion-dollar pet project of a major tobacco company. It is seriously flawed, but in ways that could have been corrected. When talking about this paper at conferences, certain representatives of the company talk about how hard it was to get it published, implying that this is because they ran into politicized ANTZ reviews. In reality the story — which I have made no secret of — is that the first few journals it was submitted to asked me to review it, which is an obvious choice given the subject matter. I wrote a detailed report on what should be done to improve the analysis, which was a lot but was very doable, and explicitly recommended to the editors that they ask for a revision in spite of how many changes it needed, rather than rejecting it. It was potentially good and important, but just needed to be fixed. The journal just rejected it. So it went to another journal, with the authors not making the changes I recommended, and I submitted the same review (fortunately, I did not have to rewrite it), and the same thing happened. I believe that sequence repeated once more (though I am not sure and am too lazy to see if I kept a record). Then the authors submitted it to another journal that did not tap me as a reviewer — perhaps because the authors requested they not do so (they knew those detailed reviews were coming from me), but probably just because journals are fairly random about who they invite to review. There it was accepted and, as you can guess, the important needed changes I noted were never made.
You might now be asking “what is the point of this whole exercise then?” That would be the right question.