Sunday Science Lesson: 13 common errors in epidemiology writing

by Carl V Phillips and Igor Burstyn

[Note: The following is a guide to writing epidemiology papers that Igor distributed to his students last week. He started drafting it and brought me in as a coauthor. While this is written as a guide for how to do epidemiology – the reporting part of doing it – it serves equally well as a guide to identifying characteristics of dishonest or incompetent epidemiology writing, which will be of more interest to most readers. Below that are my elaborations on some of the points; Igor reviewed those to make sure there was nothing he clearly disagreed with, but that part is mine alone. –CVP]

Errors commonly committed by epidemiologists and public health professionals that you must avoid in your own writing and thinking in order to practice evidence-based public health: The Unlucky 13

By Igor Burstyn and Carl V Phillips

1. A research study, unless it is a policy analysis, never suggests a policy or intervention. Policy recommendations never follow directly from epidemiologic evidence, and especially not from a single study result. If someone wants to publish your policy op-ed, go for it, but do not treat a research report as an opportunity to air your personal politics.

2. “More research like this is needed” is never a valid conclusion of a single paper that does not perform any analysis to predict the practical value of future research. More research will always tell us something, but since that is true of any research it is not worth saying. “Different research that specifically does X would help answer a key unanswered question highlighted by the present research” is potentially useful, but requires some analysis and should be invoked only if there is nothing more interesting or useful to say about the research itself.

3. Conclusory statements (whether in the conclusions section or elsewhere in the paper) must be about the results and their discussion. If concluding statements could have been written before research is conducted, they are inappropriate. If concluding statements are based on something that is not in the analysis (e.g., normative opinions about what a better world would look like), they must be inappropriate. If concluding statements from a research report include the word “should”, they are probably inappropriate.

4. Citing peer-reviewed articles as if they were facts is wrong: All peer-reviewed papers contain errors, and uncritical acceptance of what others concluded is equivalent to trusting tabloids. Even the best single study cannot create a fact. Write “A study of X by Smith et al. found that Y doubled the risk of Z”, not “Y doubles the risk of Z”. And make sure that “X” includes a statement of the exposure, outcome, and population – there are no universal constants in epidemiology.

5. Never cite any claim from a paper’s introduction, or from its conclusory statements that violate point 3. If you want to just assert the claim yourself, that might be valid, but pretending that someone else’s mere assertion makes it more valid is disingenuous.

6. Avoid using adjectives unless they are clearly defined. A result is never generically “important” (it might be important for answering some particular question). Saying “major risk factor” has no clear meaning. In particular, avoid using the word “significant” either in reference to a hypothesis test (which is inappropriate epidemiologic methodology) or in its common-language sense (because that can be mistaken for being about a hypothesis test).

7. Avoid using colloquial terms that have semi-scientific meanings, especially inflammatory ones (e.g., toxin, addiction, safe, precautionary principle), unless you give them a clear scientific definition.

8. Do not hide specific concerns behind generalities. If a statement is not specific to the problem at hand, do not make it, or make it problem-specific. For example, it is meaningless to write in an epidemiologic paper that “limitations include confounding and measurement error”, without elaborating on specifics. You should instead explain “confounding may have arisen because of [insert evidence]”, etc.

9. Introductions should provide the information that the audience for your paper needs to know to make sense of your research, and nothing more. If a statement is common knowledge for anyone familiar with your subject (e.g., “smoking is bad for you”), leave it out. If a statement is about the politics surrounding an issue, and you are writing an epidemiology report, leave it out. If there is previous research that is useful for understanding yours, put it in (and not just a few cherry-picked examples – cover or summarize all of it).

10. Your title, abstract, and all other statements should refer to what you actually studied; if you were trying to proxy for something else, explain that, but do not imply that you actually observed that. For example, if you studied the effects of eating selenium-rich foods, refer to the “effects of eating selenium-rich foods”, not the “effects of selenium”.

11. Never judge a study based on its design per se; instead think about whether the design adopted is appropriate for answering the question of interest. There is no “hierarchy of study types” in epidemiology (e.g. there are cases where an ecological study is superior to a randomized controlled trial).

12. Unless you are analyzing the sociology of science, never discuss who did a study (e.g. what conflicts of interest you perceive or read about) because this leads into a trap of identity politics that has no place in science. Focus on the quality of a particular piece of work, not the authors’ affiliation/politics/funding.

13. Even as science seeks truth, it is still merely a human activity striving to reduce the range of plausible beliefs (read more on “subjective probability”, if interested); realize this about others’ work and do not try to imply otherwise about your own work. It is acceptable to present an original thought or analysis that requires no references to prior work. Indeed, if you are saying something because it was an original thought, searching for where someone already said it just to have a citation for it is misleading and deprives you of well-deserved credit. Likewise, write in active voice (e.g. “I/We/Kim et al. conducted a survey of …” rather than “A survey was conducted …”) because it is the work of identifiable fallible humans, not the product of anonymous and infallible Platonic science.

——

Additional observations by CVP on some of the points:

Point 3 (including point 1, which is a particularly common and harmful special case of 3), along with point 9, represents the key problem with public health “research” in terms of the harm it does in society. Expert readers who are trying to learn something from an epidemiology study simply do not read the conclusions or introductions of papers. They just assume, safely, that these will have negative scientific value. However, inexpert readers often read only the abstract and conclusions, perhaps along with the other fuzzy wordy bits (math is hard!). This would not be a problem if those bits were legitimate analyses of the background (for the introduction) and of the research (for the discussion and conclusions). But that is rarely the case. Vanishingly rarely.

Instead, the introduction is usually a high-school-level report on the general topic, fraught with broadside-level simplifications and undefended political pronouncements, and the conclusions usually are not related to the actual study. As specifically noted in the guide, concluding statements to an epidemiology research report should never include any policy suggestions because there is no analysis of policy. There is no assessment of whether a particular policy or any conceivable policy would further some goal, let alone of all the impacts of the intervention. More important, there is not any assessment of the ethics or political philosophy of the particular goal, so there is no basis for normative statements. Having the technical skill to collect and analyze data conveys no special authority to assert how the world should be. None. NONE!!! (I just cannot say that emphatically enough.)

Indeed, the entire FDA e-cigarette “deeming” regulation (as drafted, and presumably also the secret final version) can be seen as a perfect example of these problems. It basically says: “Here are some [dubious] observations about characteristics of the world. Therefore we should implement the proposed policy.” As with a typical research paper, they do not fill in any of the steps in between. Why do the supposed characteristics of the world warrant any intervention? Why should we believe the policy will change those characteristics? What other effects will the particular intervention have? How do the supposed benefits compare to the costs? Literally none of these are considered. How can regulators get away with that? Because they are acting in a realm where policy recommendations are always made without considering any of that.

One technical note about point 3, the last sentence: That is phrased equivocally because a “should” statement can be about the actual research. E.g., “These results show that anyone who wishes to make claims about the effectiveness of a smoking cessation method should try to account for confounding and try to assess the inevitable residual confounding.” (Note that I do not actually know of a paper that includes that conclusion, which is really too bad, but it could be a conclusion of our second-order preferences paper.) But it is the rare “should” statement that is actually about the implications of the research, rather than some unexamined political preference.

Those points then extend into point 5 (point 4 is more of a corollary to what comes below). Because there is basically no honesty control in public health or quality control in public health journals, unsupported conclusions are frequently cited and thereby turned into “facts” by repetition. It is never safe to assume that any general observation or conclusory claim that is cited to a public health paper is actually supported by the substance of that paper. In my experience, the cited claim is actually supported only about 1/4 of the time.

Point 13, about the subjective nature of science, is subtle and only touches the surface of deep philosophical, historical, and sociological thinking. But it may be among the most important for protecting non-experts from the harms caused by dishonest (or just plain clueless) researchers. Scientific inquiry is a series of (human) decisions intermixed with a series of (human) assessments. For many mechanical actions – reading a thermometer or doing a particular regression – we have some very tightly drawn methods that (almost) exactly define an action and constrain subjectivity. But for many others, it is all about choices. Which variables were included in the regression, and in what functional form? That is a human choice. (Ideally, if there are multiple sensible candidates for those, they should all be tried, and all the results reported. Failure to do that is a candidate for the intermediate-level guide to common errors, rather than this basic one. In any case, which of the many results to highlight is still a choice.) Any author who tries to imply that their choices were other than choices – or worse, that there were no other choices available – is trying to trick the reader.
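
To make “try them all and report them all” concrete, here is a minimal sketch (mine, not from any actual paper) of what that looks like in practice. The dataset, variable names (outcome, exposure, age, bmi), and candidate specifications are all hypothetical:

```python
# Minimal sketch: fit every sensible candidate specification and report
# the exposure estimate from each, rather than highlighting just one.
# All variable names and specifications here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_all_candidates(df: pd.DataFrame) -> pd.DataFrame:
    candidate_formulas = [
        "outcome ~ exposure",                    # crude
        "outcome ~ exposure + age",              # age, linear
        "outcome ~ exposure + age + I(age**2)",  # age, quadratic
        "outcome ~ exposure + age + bmi",        # add another covariate
    ]
    rows = []
    for formula in candidate_formulas:
        model = smf.logit(formula, data=df).fit(disp=False)
        rows.append({
            "specification": formula,
            "exposure_log_odds": model.params["exposure"],
            "std_err": model.bse["exposure"],
        })
    # Report the whole table; picking one row to feature is itself a choice.
    return pd.DataFrame(rows)
```

Note that even this sketch embeds choices (why these four specifications and not others?), which is exactly the point.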

Consider the discussion of meta-analysis from a week ago. The authors of the original junk meta-analysis, in their various writings, aggressively tried to trick readers in this manner. But many of the critics did no better, trying to claim that there are some (non-existent) bright-line rules of conduct that the original authors violated. The big-picture problem I noted is that there is no scientific value in calculating meta-analysis statistics in this situation, and rather obvious costs, and that even trying to do so was absurd. But set that aside and consider the details: If you are going to do a meta-analysis, the choice of which studies to include is subjective. Should you exclude studies that obviously have huge immortal person-time selection bias problems? Honest researchers would generally agree (note: a human process) that you should. But what about studies that apparently have a little bit of such bias? Similarly, supposedly bright-line rules require subjective interpretation: Typical lists of rules for meta-analysis say that we should not include any study that does not have a comparison group. Ok, fine, but what if the study subjects are very representative of a well-defined population, and the authors use knowledge about that population as the comparison? What if the subjects are less representative of that population, but the researchers still make the comparison? What if the researchers ostensibly included a comparison population in the study, but the two populations are too different to legitimately compare?
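
To see how much the inclusion choice alone can move the headline number, here is a minimal numerical sketch. The study estimates are made up for illustration, and simple fixed-effect inverse-variance pooling stands in for whatever method a real analysis would use:

```python
# Minimal sketch: the same pooling method, two defensible inclusion
# choices, two different "results". All numbers are made up.
import math

def pooled_log_rr(studies):
    """Fixed-effect inverse-variance pooling of (log RR, SE) pairs."""
    weights = [1.0 / se ** 2 for _, se in studies]
    total = sum(weights)
    pooled = sum(w * lrr for w, (lrr, _) in zip(weights, studies)) / total
    return pooled, math.sqrt(1.0 / total)

# Hypothetical study results as (log relative risk, standard error).
all_studies = [(0.10, 0.05), (0.25, 0.10), (0.60, 0.15)]

for label, included in [
    ("include all three studies", all_studies),
    ("exclude the study suspected of selection bias", all_studies[:2]),
]:
    est, se = pooled_log_rr(included)
    print(f"{label}: pooled RR = {math.exp(est):.2f} (SE of log RR = {se:.2f})")
```

Neither answer is “the” result; each reflects a human judgment about inclusion that an honest report would state and defend.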

Similar confusion — naïve beliefs that standard approaches are not subjective choices, or that merely stating a standard creates a non-subjective bright line — can be found everywhere you look in and around epidemiology. If you have ever seen someone invoke “causal criteria”, it is undoubtedly an example of this. Those lists are not criteria, but merely suggestions for points to (subjectively) consider when assessing causation; they contain no rules. Use of logistic regression is the default method for generating statistics in epidemiology, but the assumptions it embeds are not necessarily reasonable and its use is a choice (perhaps driven by the researchers not knowing how to do anything else, but that is a different problem).
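
One concrete illustration of the logistic regression point (my sketch, with made-up 2×2 counts): logistic regression delivers odds ratios, and treating those as if they were the risk ratios most readers have in mind is only defensible when the outcome is rare.

```python
# Minimal sketch: odds ratios (what logistic regression estimates)
# approximate risk ratios only for rare outcomes. Counts are made up.
def odds_ratio(a, b, c, d):
    """2x2 table: a/b = exposed cases/non-cases, c/d = unexposed."""
    return (a / b) / (c / d)

def risk_ratio(a, b, c, d):
    return (a / (a + b)) / (c / (c + d))

# Common outcome: OR = 4.0 while RR = 1.6 -- very different stories.
print(odds_ratio(80, 20, 50, 50), risk_ratio(80, 20, 50, 50))
# Rare outcome: OR ~= 2.02 and RR = 2.0 -- nearly the same.
print(odds_ratio(2, 98, 1, 99), risk_ratio(2, 98, 1, 99))
```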

Now do not confuse “all science is subjective” with nihilism or “anything goes”. There is general agreement within families of sciences about good practice, and even many “never” and “always” rules. For example, you should not just assemble an arbitrary group of people and make up numbers, with nary a calculation in sight, and call those scientific results that can then be used in quantitative assessments. (Well, actually they do do that in public health, but it is an indefensible practice.) But note that “general agreement” is another sociological phenomenon. When a field has evolved good standards, it produces reliable science. We learn, despite the fact that if you peel back the layers there is no bedrock foundation on which the process is built. It all comes down to human action, not some magical rules. The scientific failings of the “public health” field demonstrate what happens when this sociological process breaks down.

A few final observations: The reference in point 6, about statistical significance being an inappropriate consideration in epidemiology, requires a deeper explanation than I can cover today. This may strike many non-expert readers as surprising, given that many ostensible experts probably do not even understand the point. I have covered it before. Similarly point 11: If someone makes a claim about a particular study type being generically better (as opposed to “better for answering this particular question because [insert specifics]”), that mostly tells you they have a very rudimentary understanding of epidemiology. I devoted a post to point 10 just a few days ago, and I address point 7 quite frequently. The last sentence of point 4 is particularly important; anyone who describes an epidemiologic statistic (and even more so, a statistic about behavior or preferences) as if it were a universal constant clearly has little understanding of the science.

Responses to “Sunday Science Lesson: 13 common errors in epidemiology writing”

  1. An epidemiologist, I am not. Though I must admit this post can be so easily applied to many other areas, be they scientific or not. It was like having an English lesson on common sense. It was thoroughly enjoyable. If more writers followed these basic elements, then there would be a lot more clarity in their posts, or papers, that can be backed up by reputable sources.

  2. “Even the best single study cannot create a fact.”

    This was all wonderful reading (as the SSL always is), but this sentence deserves its own special commendation.

  3. The overwhelming majority of studies conducted and funded by Obama’s DHHS (FDA, CDC, NIH, NIDA, etc.) on vapor products, cigars, hookah, nicotine and flavorings commit most of the 13 common errors cited in this excellent article.

    The reason for that is very simple, as DHHS and its funded researchers and institutions know that their research (and especially their comments to the news media about their research) was funded to promote the FDA’s deeming regulation that would ban virtually all nicotine vapor products, and many/most cigars, pipe tobacco and shisha/hookah products.

    Although DHHS agencies during previous administrations conducted and funded similarly flawed research (but not on vapor products), the number of deeply flawed studies funded by DHHS during the past seven years has dwarfed those from previous administrations (probably because the top priority for irreconcilably conflicted DHHS officials is to impose the FDA’s deeming ban).

    But these deeply flawed studies are not very influential by themselves.
    What causes the real damage are the press releases (especially embargoed press releases) that are issued by DHHS agencies, DHHS-funded academic institutions, DHHS-funded researchers, and/or journal editors (where the study is published), which further misrepresent the study’s findings and sensationalize one or two cherry-picked factoids to confuse, scare, and generate fear-mongering headlines (to lobby for DHHS’ anti-THR policy goals, most notably the FDA deeming ban).
