Category Archives: Lies

Regular entries for this blog – bits of the catalog of lies.

Sunday Science Lesson: 13 common errors in epidemiology writing

by Carl V Phillips and Igor Burstyn

[Note: The following is a guide to writing epidemiology papers that Igor distributed to his students last week. He started drafting it and brought me in as a coauthor. While this is written as a guide for how to do epidemiology – the reporting part of doing it – it serves equally well as a guide to identifying characteristics of dishonest or incompetent epidemiology writing, which will be of more interest to most readers. Below that are my elaborations on some of the points; Igor reviewed those to make sure there was nothing he clearly disagreed with, but that part is mine alone. –CVP]

Commonly committed errors by epidemiologists and public health professionals that you must avoid in your own writing and thinking in order to practice evidence-based public health: The Unlucky 13

By Igor Burstyn and Carl V Phillips

1. A research study, unless it is a policy analysis, never suggests a policy or intervention. Policy recommendations never follow directly from epidemiologic evidence, and especially not from a single study result. If someone wants to publish your policy op-ed, go for it, but do not treat a research report as an opportunity to air your personal politics.

2. “More research like this is needed” is never a valid conclusion of a single paper that does not perform any analysis to predict the practical value of future research. More research will always tell us something, but that is always true, so it is not worth saying. “Different research that specifically does X would help answer a key unanswered question highlighted by the present research” is potentially useful, but requires some analysis and should be invoked only if there is nothing more interesting or useful to say about the research itself.

3. Conclusory statements (whether in the conclusions section or elsewhere in the paper) must be about the results and their discussion. If concluding statements could have been written before research is conducted, they are inappropriate. If concluding statements are based on something that is not in the analysis (e.g., normative opinions about what a better world would look like), they must be inappropriate. If concluding statements from a research report include the word “should”, they are probably inappropriate.

4. Citing peer-reviewed articles as if they were facts is wrong: All peer-reviewed papers contain errors, and uncritical acceptance of what others concluded is equivalent to trusting tabloids. Even the best single study cannot create a fact. Write “A study of X by Smith et al. found that Y doubled the risk of Z”, not “Y doubles the risk of Z”. And make sure that “X” includes a statement of the exposure, outcome, and population – there are no universal constants in epidemiology.

5. Never cite any claim from a paper’s introduction, or from its conclusory statements that violate point 3. If you want to just assert the claim yourself, that might be valid, but pretending that someone else’s mere assertion makes it more valid is disingenuous.

6. Avoid using adjectives unless they are clearly defined. A result is never generically “important” (it might be important for answering some particular question). Saying “major risk factor” has no clear meaning. In particular, avoid using the word “significant” either in reference to a hypothesis test (which is inappropriate epidemiologic methodology) or in its common-language sense (because that can be mistaken for being about a hypothesis test).

7. Avoid using colloquial terms that have semi-scientific meanings, especially inflammatory ones (e.g., toxin, addiction, safe, precautionary principle), unless you give them a clear scientific definition.

8. Do not hide specific concerns behind generalities. If a statement is not specific to the problem at hand, do not make it, or make it problem-specific. For example, it is meaningless to write in an epidemiologic paper that “limitations include confounding and measurement error”, without elaborating on specifics. You should instead explain “confounding may have arisen because of [insert evidence]”, etc.

9. Introductions should provide the information that the audience for your paper needs to know to make sense of your research, and nothing more. If a statement is common knowledge for anyone familiar with your subject (e.g., “smoking is bad for you”), leave it out. If a statement is about the politics surrounding an issue, and you are writing an epidemiology report, leave it out. If there is previous research that is useful for understanding yours, put it in (and not just a few cherry-picked examples – cover or summarize all of it).

10. Your title, abstract, and all other statements should refer to what you actually studied; if you were trying to proxy for something else, explain that, but do not imply that you actually observed that. For example, if you studied the effects of eating selenium-rich foods, refer to the “effects of eating selenium-rich foods”, not the “effects of selenium”.

11. Never judge a study based on its design per se; instead think about whether the design adopted is appropriate for answering the question of interest. There is no “hierarchy of study types” in epidemiology (e.g., there are cases where an ecological study is superior to a randomized controlled trial).

12. Unless you are analyzing the sociology of science, never discuss who did a study (e.g., what conflicts of interest you perceive or read about), because this leads into a trap of identity politics that has no place in science. Focus on the quality of a particular piece of work, not the authors’ affiliation/politics/funding.

13. Even as science seeks truth, it is still merely a human activity striving to reduce the range of plausible beliefs (read more on “subjective probability”, if interested); realize this about others’ work and do not try to imply otherwise about your own work. It is acceptable to present an original thought or analysis that requires no references to prior work. Indeed, if you are saying something because it was an original thought, searching for where someone already said it just to have a citation for it is misleading and deprives you of well-deserved credit. Likewise, write in the active voice (e.g., “I/We/Kim et al. conducted a survey of …” rather than “A survey was conducted …”) because it is the work of identifiable fallible humans, not the product of anonymous and infallible Platonic science.


Additional observations by CVP on some of the points:

Point 3 (including point 1, which is a particularly common and harmful special case of 3), along with point 9, represents the key problem with public health “research” in terms of the harm it does in society. Expert readers who are trying to learn something from an epidemiology study simply do not read the conclusions or introductions of papers. They just assume, safely, that these will have negative scientific value. However, inexpert readers often only read the abstract and conclusions, perhaps along with the other fuzzy wordy bits (math is hard!). This would not be a problem if those bits were legitimate analyses of the background (for the introduction) and of the research (for the discussion and conclusions). But that is rarely the case. Vanishingly rarely.

Instead, the introduction is usually a high-school-level report on the general topic, fraught with broadside-level simplifications and undefended political pronouncements, and the conclusions usually are not related to the actual study. As specifically noted in the guide, concluding statements to an epidemiology research report should never include any policy suggestions because there is no analysis of policy. There is no assessment of whether a particular policy or any conceivable policy would further some goal, let alone of all the impacts of the intervention. More important, there is not any assessment of the ethics or political philosophy of the particular goal, so there is no basis for normative statements. Having the technical skill to collect and analyze data conveys no special authority to assert how the world should be. None. NONE!!! (I just cannot say that emphatically enough.)

Indeed, the entire FDA e-cigarette “deeming” regulation (as drafted, and presumably also the secret final version) can be seen as a perfect example of these problems. It basically says: “Here are some [dubious] observations about characteristics of the world. Therefore we should implement the proposed policy.” As with a typical research paper, they do not fill in any of the steps in between. Why do the supposed characteristics of the world warrant any intervention? Why should we believe the policy will change those characteristics? What other effects will the particular intervention have? How do the supposed benefits compare to the costs? Literally none of these are considered. How can regulators get away with that? Because they are acting in a realm where policy recommendations are always made without considering any of that.

One technical note about point 3, the last sentence: That is phrased equivocally because a “should” statement can be about the actual research. E.g., “These results show that anyone who wishes to make claims about the effectiveness of a smoking cessation method should try to account for confounding and try to assess the inevitable residual confounding.” (Note that I do not actually know of a paper that includes that conclusion, which is really too bad, but it could be a conclusion of our second-order preferences paper.) But it is the rare “should” statement that is actually about the implications of the research, rather than some unexamined political preference.

Those points then extend into point 5 (point 4 is more of a corollary to what comes below). Because there is basically no honesty control in public health or quality control in public health journals, unsupported conclusions are frequently cited and thereby turned into “facts” by repetition. It is never safe to assume that any general observation or conclusory claim that is cited to a public health paper is actually supported by the substance of that paper. In my experience, such claims are actually supported only about 1/4 of the time.

Point 13, about the subjective nature of science, is subtle and only touches the surface of deep philosophical, historical, and sociological thinking. But it may be among the most important for protecting non-experts from the harms caused by dishonest (or just plain clueless) researchers. Scientific inquiry is a series of (human) decisions intermixed with a series of (human) assessments. For many mechanical actions – reading a thermometer or doing a particular regression – we have some very tightly drawn methods that (almost) exactly define an action and constrain subjectivity. But for many others, it is all about choices. Which variables were included in the regression, and in what functional form? That is a human choice. (Ideally, if there are multiple sensible candidates for those, they should all be tried, and all the results reported. Failure to do that is a candidate for the intermediate-level guide to common errors, rather than this basic one. In any case, which of the many results to highlight is still a choice.) Any author who tries to imply that their choices were other than choices — or worse that there were no other choices available — is trying to trick the reader.
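To make the variable-selection point concrete, here is a toy calculation. It uses stratification, the simplest possible stand-in for regression adjustment, and numbers I invented purely for illustration. Whether to adjust for age is a human choice, and the two choices give very different answers:

```python
# Invented counts, contrived to illustrate that "adjust or not" is a choice
# that changes the answer: (cases, total) for exposed and unexposed groups,
# within two age strata.
strata = {
    "young": {"exposed": (10, 100), "unexposed": (40, 400)},
    "old":   {"exposed": (160, 400), "unexposed": (40, 100)},
}

def risk_ratio(exposed, unexposed):
    (a, n1), (b, n0) = exposed, unexposed
    return (a / n1) / (b / n0)

# Choice 1: crude analysis, collapsing over age.
cases_e = sum(s["exposed"][0] for s in strata.values())
total_e = sum(s["exposed"][1] for s in strata.values())
cases_u = sum(s["unexposed"][0] for s in strata.values())
total_u = sum(s["unexposed"][1] for s in strata.values())
crude_rr = risk_ratio((cases_e, total_e), (cases_u, total_u))

# Choice 2: stratified analysis, reporting the risk ratio within each age group.
stratum_rrs = {k: risk_ratio(s["exposed"], s["unexposed"]) for k, s in strata.items()}

print(crude_rr, stratum_rrs)  # the crude analysis suggests a doubled risk; each stratum shows none
```

With these made-up numbers, the crude risk ratio is 2.125 while both stratum-specific risk ratios are exactly 1.0. Neither answer is handed down by “the data”; which one gets reported is a decision made by a person.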

Consider the discussion of meta-analysis from a week ago. The authors of the original junk meta-analysis, in their various writings, aggressively tried to trick readers in this manner. But many of the critics did no better, trying to claim that there are some (non-existent) bright-line rules of conduct that the original authors violated. The big-picture problem I noted is that there is no scientific value in calculating meta-analysis statistics in this situation, and rather obvious costs, and that even trying to do so was absurd. But set that aside and consider the details: If you are going to do a meta-analysis, the choice of which studies to include is subjective. Should you exclude studies that obviously have huge immortal person-time selection bias problems? Honest researchers would generally agree (note: a human process) that you should. But what about studies that apparently have a little bit of such bias? Similarly, supposedly bright-line rules require subjective interpretation: Typical lists of rules for meta-analysis say that we should not include any study that does not have a comparison group. Ok, fine, but what if the study subjects are very representative of a well-defined population, and the authors use knowledge about that population as the comparison? What if the subjects are less representative of that population, but the researchers still make the comparison? What if the researchers ostensibly included a comparison population in the study, but the two populations are too different to legitimately compare?
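To illustrate just how sensitive a pooled estimate is to the inclusion choice, here is a toy fixed-effect (inverse-variance) meta-analysis. The study numbers are invented for illustration and have nothing to do with the actual studies in that dispute:

```python
import math

def pooled_rr(studies):
    """Fixed-effect inverse-variance pooling of log relative risks.

    Each study is a (relative_risk, variance_of_log_rr) pair; weights are
    the reciprocals of the variances.
    """
    weights = [1.0 / var for _, var in studies]
    log_rrs = [math.log(rr) for rr, _ in studies]
    pooled_log = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
    return math.exp(pooled_log)

# Invented (relative risk, variance of log RR) pairs for three studies.
studies = [(1.8, 0.04), (1.2, 0.09), (0.9, 0.02)]

with_all = pooled_rr(studies)
# Dropping the third study -- say, on a judgment call that it suffers from
# selection bias -- moves the pooled estimate substantially.
without_third = pooled_rr(studies[:2])

print(with_all, without_third)
```

With these made-up numbers, including all three studies yields a pooled relative risk of about 1.14, while excluding the third pushes it to about 1.59. The arithmetic is mechanical; the answer is driven by the subjective inclusion decision.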

Similar confusion — naïve beliefs that standard approaches are not subjective choices, or that merely stating a standard creates a non-subjective bright line — can be found everywhere you look in and around epidemiology. If you have ever seen someone invoke “causal criteria”, it is undoubtedly an example of this. Those lists are not criteria, but merely suggestions for points to (subjectively) consider when assessing causation; they contain no rules. Use of logistic regression is the default method for generating statistics in epidemiology, but the assumptions it embeds are not necessarily reasonable and its use is a choice (perhaps driven by the researchers not knowing how to do anything else, but that is a different problem).

Now do not confuse “all science is subjective” with nihilism or “anything goes”. There is general agreement within families of sciences about good practice, and even many “never” and “always” rules. For example, you should not just assemble an arbitrary group of people and make up numbers, with nary a calculation in sight, and call those scientific results that can then be used in quantitative assessments. (Well, actually they do do that in public health, but it is an indefensible practice.) But note that “general agreement” is another sociological phenomenon. When a field has evolved good standards, it produces reliable science. We learn, despite the fact that if you peel back the layers there is no bedrock foundation on which the process is built. It all comes down to human action, not some magical rules. The scientific failings of the “public health” field demonstrate what happens when this sociological process breaks down.

A few final observations: The reference in point 6, about statistical significance being an inappropriate consideration in epidemiology, requires a deeper explanation than I can cover today. This may strike many non-expert readers as surprising, given that many ostensible experts probably do not even understand the point. I have covered it before. Similarly point 11: If someone makes a claim about a particular study type being generically better (as opposed to “better for answering this particular question because [insert specifics]”), that mostly tells you they have a very rudimentary understanding of epidemiology. I devoted a post to point 10 just a few days ago, and I address point 7 quite frequently. The last sentence of point 4 is particularly important; anyone who describes an epidemiologic statistic (and even more so, a statistic about behavior or preferences) as if it were a universal constant clearly has little understanding of the science.

Glantz responds to his (other) critics, helping make my point

by Carl V Phillips

Yesterday, I explained what was fundamentally wrong with Stanton Glantz’s new “meta-analysis” paper, beginning with parody and ending with a lament about the approach of his critics who are within public health. Glantz posted a rebuttal to the press release from those critics on his blog, which does a really nice job of helping me make some of my points. I look forward to his attempt to rebut my critique (hahaha — like he would dare), which would undoubtedly help me even more.

Glantz pretty well sums it up with:

The methods and interpretations in our paper follow standard statistical methods for analyzing and interpreting data.


The bright side of new Glantz “meta-analysis”: at least he left aerospace engineering

by Carl V Phillips

Stanton Glantz is at it again, publishing utter drivel. Sorry, that should be taxpayer-funded utter drivel. The journal version is here and his previous version on his blog here. I decided to rewrite the abstract, imagining that Glantz had stayed in the field he apparently trained in, aerospace/mechanical engineering. (For those who do not get the jokes, read on — I explain in the analysis. Clive Bates already explained much of this, but I am distilling it down to the most essential problems and trying to explain them so the reasons for them are apparent and this is not just a battle of assertions.)

The key fact about ecig junk science: “public health” is a dishonest discipline

by Carl V Phillips

The latest kerfuffle around e-cigarette junk science comes from this toxicology study or, more precisely, this press release that is vaguely related to the study. Basically, a San Diego toxicology research group bathed cells in a very high dose of the liquids that come out of an e-cigarette, and eventually there were detectable changes in the cells. That is really all you need to know about the study’s actual results. (If you want more background, see Clive Bates’s post.) Contrived experiments like this provide nothing more than a bit of vague information that might someday lead to insight about the real world, though probably will not, and so might be worth exploring more using less ham-handed methods. That is all the information this type of research ever provides. No worldly conclusions are possible. It is vague basic science research that even at its best merely points the way for further research.

CASAA comments on FDA proposed “intended use” regulation of ecigs

by Carl V Phillips

CASAA recently submitted its comment about an FDA proposed rule, something they presumably intend to implement as soon as they “deem” e-cigarettes (and thus before they ban ~99.99% of them outright). The rule basically attempts to claw back much of what FDA was declared to not have the authority to do in Judge Leon’s landmark ruling. You can read more and link through to the FDA proposal at our Call To Action (which you can still respond to at the time of this writing, through Wednesday).

The short version is that not only would e-cigarette manufacturers and vendors be prohibited from communicating the clearly accurate message that e-cigarettes are a low-risk alternative to cigarettes, as they are already prevented from doing, to the great detriment of real public health. But under this rule they would not even be able to suggest that e-cigarettes are merely a substitute for cigarettes. If they do, the FDA wishes to assert, this would be a disease treatment claim and thus subject their products to pharmaceutical regulation (which really means they would be banned immediately).

Um, yeah. Our submitted comment is posted at the main CASAA blog. (It is short enough to read in that format, unlike the last one, but if you prefer the formatted version it is here (pdf).)

Utter innumeracy: six impossible claims about tobacco most “public health” people believe before breakfast

by Carl V Phillips

As anyone with a modest understanding of the science knows, tobacco controllers and other “public health” people make countless statements that are utterly false. The tobacco control industry depends on making claims that flatly contradict what the science shows. But there is a special class of claims that are not wrong just because they contradict particular empirical evidence; rather, everyone should know they are wrong based merely on understanding some basics of how the world works. Many such claims are constantly repeated as if they were self-evidently true even though they are actually self-evidently false. I was having trouble defining the category until I recalled the quote from Alice in Wonderland alluded to in the title.

Peer review in “public health” — Tobacco Control journal own-goal edition

by Carl V Phillips

Clive Bates prods me to write something about this editorial in the journal/political magazine/comic book, Tobacco Control, by Editor-in-Chief Ruth Malone, honoring their “top reviewers”. (Oh, wait, it is a British publishing house, so that should be: “honouring their toup reviewers”.) You can view it yourself, because it is open access, unlike their regular articles which they hide behind a paywall to inhibit real peer review (very few libraries subscribe to Tobacco Control, to their great credit). They really should have hidden this one from scrutiny too.

Sweden pulls up the drawbridge behind them (outsource)

by Carl V Phillips

Sorry for the blog silence. Busy. It will continue for a few days. In the meantime, I recommend reading this post by Erik (Atakan) Befrits about Sweden caving to WHO pressure to try to use warning labels to try to scare people away from THR. It contains some important perspective that many of my readers may not normally get. (Obvious disclaimer: I don’t necessarily agree with every word of the post.)

The crux:

Make no mistake about this: This new law on snus will have exactly ZERO effect here in Sweden where ZERO people die from using snus. It will however have devastating effects for the health of smokers and users of toxic smokeless formulations in THE 193 OTHER recognized countries in the world.