Diacetyl in e-cigarettes — what we can really say (not much)

by Carl V Phillips

I received an inquiry about diacetyl from a reporter in the e-cigarette press and thought I would share my response more widely. Most readers will know that diacetyl is a food additive that produces a strong buttery flavor, and that it is the most controversial among the several controversial flavoring agents in e-cigarettes. The concern is that high-dose inhalation exposure is believed to produce the horrible acute lung disease, bronchiolitis obliterans, commonly known as “popcorn lung” because the cases have been found in workers in flavored popcorn factories and in at least one consumer who was huffing the outgassing from microwaved popcorn. (I say “believed” because all of these victims inhaled a cocktail of chemicals, not just the one molecule, so it is consistent with the evidence that the outcome depends on the multiple exposures; it is even possible that diacetyl is not a necessary part of the mix.) Continue reading

Sunday Science Lesson: 13 common errors in epidemiology writing

by Carl V Phillips and Igor Burstyn

[Note: The following is a guide to writing epidemiology papers that Igor distributed to his students last week. He started drafting it and brought me in as a coauthor. While this is written as a guide for how to do epidemiology – the reporting part of doing it – it serves equally well as a guide to identifying characteristics of dishonest or incompetent epidemiology writing, which will be of more interest to most readers. Below that are my elaborations on some of the points; Igor reviewed those to make sure there was nothing he clearly disagreed with, but that part is mine alone. –CVP]

Commonly committed errors by epidemiologists and public health professionals that you must avoid in your own writing and thinking in order to practice evidence-based public health: The Unlucky 13

By Igor Burstyn and Carl V Phillips

1. A research study, unless it is a policy analysis, never suggests a policy or intervention. Policy recommendations never follow directly from epidemiologic evidence, and especially not from a single study result. If someone wants to publish your policy op-ed, go for it, but do not treat a research report as an opportunity to air your personal politics.

2. “More research like this is needed” is never a valid conclusion of a single paper that does not perform any analysis to predict the practical value of future research. More research will always tell us something, but because that is true of any conceivable study it is not worth saying. “Different research that specifically does X would help answer a key unanswered question highlighted by the present research” is potentially useful, but requires some analysis and should be invoked only if there is nothing more interesting or useful to say about the research itself.

3. Conclusory statements (whether in the conclusions section or elsewhere in the paper) must be about the results and their discussion. If concluding statements could have been written before research is conducted, they are inappropriate. If concluding statements are based on something that is not in the analysis (e.g., normative opinions about what a better world would look like), they must be inappropriate. If concluding statements from a research report include the word “should”, they are probably inappropriate.

4. Citing peer-reviewed articles as if they were facts is wrong: All peer-reviewed papers contain errors and uncritical acceptance of what others concluded is equivalent to trusting tabloids. Even the best single study cannot create a fact. Write “A study of X by Smith et al. found that Y doubled the risk of Z” not “Y doubles the risk of Z”. And make sure that “X” includes a statement of the exposure, outcome, and population – there are no universal constants in epidemiology.

5. Never cite any claim from a paper’s introduction, or from its conclusory statements that violate point 3. If you want to just assert the claim yourself, that might be valid, but pretending that someone else’s mere assertion makes it more valid is disingenuous.

6. Avoid using adjectives unless they are clearly defined. A result is never generically “important” (it might be important for answering some particular question). Saying “major risk factor” has no clear meaning. In particular, avoid using the word “significant” either in reference to a hypothesis test (which is inappropriate epidemiologic methodology) or in its common-language sense (because that can be mistaken for being about a hypothesis test).

7. Avoid using colloquial terms that have semi-scientific meanings, especially inflammatory ones (e.g., toxin, addiction, safe, precautionary principle), unless you give them a clear scientific definition.

8. Do not hide specific concerns behind generalities. If a statement is not specific to the problem at hand, do not make it, or make it problem-specific. For example, it is meaningless to write in an epidemiologic paper that “limitations include confounding and measurement error”, without elaborating on specifics. You should instead explain “confounding may have arisen because of [insert evidence]”, etc.

9. Introductions should provide the information that the audience for your paper needs to know to make sense of your research, and nothing more. If a statement is common knowledge for anyone familiar with your subject (e.g., “smoking is bad for you”), leave it out. If a statement is about the politics surrounding an issue, and you are writing an epidemiology report, leave it out. If there is previous research that is useful for understanding yours, put it in (and not just a few cherrypicked examples – cover or summarize all of it).

10. Your title, abstract, and all other statements should refer to what you actually studied; if you were trying to proxy for something else, explain that, but do not imply that you actually observed that. For example, if you studied the effects of eating selenium-rich foods, refer to the “effects of eating selenium-rich foods”, not the “effects of selenium”.

11. Never judge a study based on its design per se but instead think about whether the adopted design is appropriate for answering the question of interest. There is no “hierarchy of study types” in epidemiology (e.g., there are cases where an ecological study is superior to a randomized controlled trial).

12. Unless you are analyzing the sociology of science, never discuss who did a study (e.g., what conflicts of interest you perceive or read about) because this leads into a trap of identity politics that has no place in science. Focus on the quality of a particular piece of work, not the authors’ affiliation, politics, or funding.

13. Even as science seeks truth, it is still merely a human activity striving to reduce the range of plausible beliefs (read more on “subjective probability”, if interested); realize this about others’ work and do not try to imply otherwise about your own work. It is acceptable to present an original thought or analysis that requires no references to prior work. Indeed, if you are saying something because it was an original thought, searching for where someone already said it just to have a citation for it is misleading and deprives you of well-deserved credit. Likewise, write in active voice (e.g., “I/We/Kim et al. conducted a survey of …” rather than “A survey was conducted …”) because it is the work of identifiable fallible humans, not the product of anonymous and infallible Platonic science.

——

Additional observations by CVP on some of the points:

Point 3 (including point 1, which is a particularly common and harmful special case of 3), along with point 9, represents the key problem with public health “research” in terms of the harm it does in society. Expert readers who are trying to learn something from an epidemiology study simply do not read the conclusions or introductions of papers. They just assume, safely, that these will have negative scientific value. However, inexpert readers often read only the abstract and conclusions, perhaps along with the other fuzzy wordy bits (math is hard!). This would not be a problem if those bits were legitimate analyses of the background (for the introduction) and of the research (for the discussion and conclusions). But that is rarely the case. Vanishingly rarely.

Instead, the introduction is usually a high-school-level report on the general topic, usually fraught with broadside-level simplifications and undefended political pronouncements, and the conclusions usually are not related to the actual study. As specifically noted in the guide, concluding statements to an epidemiology research report should never include any policy suggestions because there is no analysis of policy. There is no assessment of whether a particular policy or any conceivable policy would further some goal, let alone of all the impacts of the intervention. More important, there is no assessment of the ethics or political philosophy of the particular goal, so there is no basis for normative statements. Having the technical skill to collect and analyze data conveys no special authority to assert how the world should be. None. NONE!!! (I just cannot say that emphatically enough.)

Indeed, the entire FDA e-cigarette “deeming” regulation (as drafted, and presumably also the secret final version) can be seen as a perfect example of these problems. It basically says: “Here are some [dubious] observations about characteristics of the world. Therefore we should implement the proposed policy.” As with a typical research paper, they do not fill in any of the steps in between. Why do the supposed characteristics of the world warrant any intervention? Why should we believe the policy will change those characteristics? What other effects will the particular intervention have? How do the supposed benefits compare to the costs? Literally none of these are considered. How can regulators get away with that? Because they are acting in a realm where policy recommendations are always made without considering any of that.

One technical note about point 3, the last sentence: That is phrased equivocally because a “should” statement can be about the actual research. E.g., “These results show that anyone who wishes to make claims about the effectiveness of a smoking cessation method should try to account for confounding and try to assess the inevitable residual confounding.” (Note that I do not actually know of a paper that includes that conclusion, which is really too bad, but it could be a conclusion of our second-order preferences paper.) But it is the rare “should” statement that is actually about the implications of the research, rather than some unexamined political preference.

Those points then extend into point 5 (point 4 is more of a corollary to what comes below). Because there is basically no honesty control in public health or quality control in public health journals, unsupported conclusions are frequently cited and thereby turned into “facts” by repetition. It is never safe to assume that any general observation or conclusory claim that is cited to a public health paper is actually supported by the substance of that paper. In my experience, such a claim is actually supported only about a quarter of the time.

Point 13, about the subjective nature of science, is subtle and only touches the surface of deep philosophical, historical, and sociological thinking. But it may be among the most important for protecting non-experts from the harms caused by dishonest (or just plain clueless) researchers. Scientific inquiry is a series of (human) decisions intermixed with a series of (human) assessments. For many mechanical actions – reading a thermometer or doing a particular regression – we have some very tightly drawn methods that (almost) exactly define an action and constrain subjectivity. But for many others, it is all about choices. Which variables were included in the regression, and in what functional form? That is a human choice. (Ideally, if there are multiple sensible candidates for those, they should all be tried, and all the results reported. Failure to do that is a candidate for the intermediate-level guide to common errors, rather than this basic one. In any case, which of the many results to highlight is still a choice.) Any author who tries to imply that their choices were other than choices — or worse that there were no other choices available — is trying to trick the reader.
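
To make that concrete, here is a minimal sketch (in Python, with simulated data; the variable names and numbers are invented for illustration, not taken from any real study) of what “try all the sensible specifications and report them all” looks like in practice. Each specification is a defensible choice, and the honest move is to show the reader the full set rather than the single most convenient number.

```python
# Hypothetical illustration: several defensible logistic-regression
# specifications are fit to the same simulated data, and the exposure
# odds ratio from each is reported, rather than quietly picking one.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(20, 70, n)
exposure = rng.binomial(1, 0.3 + 0.004 * (age - 20))        # exposure varies with age
true_logit = -2.0 + 0.4 * exposure + 0.03 * (age - 45)      # simulated "truth" is linear in age
outcome = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))
df = pd.DataFrame({"outcome": outcome, "exposure": exposure, "age": age})
df["age_band"] = pd.cut(df["age"], bins=[20, 35, 50, 70], include_lowest=True).astype(str)

# Which covariates to include, and in what functional form, is a choice.
specs = {
    "crude":           "outcome ~ exposure",
    "age, linear":     "outcome ~ exposure + age",
    "age, quadratic":  "outcome ~ exposure + age + I(age ** 2)",
    "age, categories": "outcome ~ exposure + C(age_band)",
}

for label, formula in specs.items():
    fit = smf.logit(formula, data=df).fit(disp=0)
    print(f"{label:16s} OR for exposure = {np.exp(fit.params['exposure']):.2f}")
```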

Consider the discussion of meta-analysis from a week ago. The authors of the original junk meta-analysis, in their various writings, aggressively tried to trick readers in this manner. But many of the critics did no better, trying to claim that there are some (non-existent) bright-line rules of conduct that the original authors violated. The big-picture problem I noted is that there is no scientific value in calculating meta-analysis statistics in this situation, and rather obvious costs, and that even trying to do so was absurd. But set that aside and consider the details: If you are going to do a meta-analysis, the choice of which studies to include is subjective. Should you exclude studies that obviously have huge immortal person-time selection bias problems? Honest researchers would generally agree (note: a human process) that you should. But what about studies that apparently have a little bit of such bias? Similarly, supposedly bright-line rules require subjective interpretation: Typical lists of rules for meta-analysis say that we should not include any study that does not have a comparison group. Ok, fine, but what if the study subjects are very representative of a well-defined population, and the authors use knowledge about that population as the comparison? What if the subjects are less representative of that population, but the researchers still make the comparison? What if the researchers ostensibly included a comparison population in the study, but the two populations are too different to legitimately compare?
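
For readers who want to see how much the inclusion choice alone can matter, here is a minimal sketch of a standard fixed-effect inverse-variance pooled estimate (the study numbers are invented for illustration and have nothing to do with the papers discussed above). The arithmetic is entirely mechanical; the answer is driven by the prior, subjective decision about which studies belong in it.

```python
# Hypothetical illustration: the same standard pooling arithmetic gives
# different answers depending on the (subjective) decision about which
# studies to include. All numbers are invented.
import numpy as np

def pooled_or(odds_ratios, upper_cls):
    """Fixed-effect inverse-variance pooling on the log odds-ratio scale.
    Standard errors are back-calculated from the upper 95% confidence limits."""
    log_or = np.log(odds_ratios)
    se = (np.log(upper_cls) - log_or) / 1.96
    weights = 1.0 / se ** 2
    pooled = np.sum(weights * log_or) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return tuple(np.exp([pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se]))

# Invented study results: odds ratio and upper 95% confidence limit
studies = {"A": (0.70, 1.10), "B": (0.85, 1.30), "C": (1.60, 2.40)}
ors = np.array([v[0] for v in studies.values()])
uppers = np.array([v[1] for v in studies.values()])

print("all three studies:  OR %.2f (95%% CI %.2f-%.2f)" % pooled_or(ors, uppers))

# Now suppose the analyst judges study C too biased (or too dissimilar) to include:
print("excluding study C:  OR %.2f (95%% CI %.2f-%.2f)" % pooled_or(ors[:2], uppers[:2]))
```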

Similar confusion — naïve beliefs that standard approaches are not subjective choices, or that merely stating a standard creates a non-subjective bright line — can be found everywhere you look in and around epidemiology. If you have ever seen someone invoke “causal criteria”, it is undoubtedly an example of this. Those lists are not criteria, but merely suggestions for points to (subjectively) consider when assessing causation; they contain no rules. Use of logistic regression is the default method for generating statistics in epidemiology, but the assumptions it embeds are not necessarily reasonable and its use is a choice (perhaps driven by the researchers not knowing how to do anything else, but that is a different problem).
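
One small, hypothetical illustration of how a default embeds a choice (the counts are invented): from the very same 2×2 table, the odds ratio that logistic regression naturally produces and the plain risk ratio are both legitimate summaries, yet they diverge whenever the outcome is common, so reporting one rather than the other is itself a decision.

```python
# Invented counts: 100 exposed and 100 unexposed people, with a fairly common outcome
exposed_cases, exposed_total = 40, 100
unexposed_cases, unexposed_total = 20, 100

risk_exposed = exposed_cases / exposed_total          # 0.40
risk_unexposed = unexposed_cases / unexposed_total    # 0.20
odds_exposed = risk_exposed / (1 - risk_exposed)
odds_unexposed = risk_unexposed / (1 - risk_unexposed)

print("risk ratio:", round(risk_exposed / risk_unexposed, 2))   # 2.0
print("odds ratio:", round(odds_exposed / odds_unexposed, 2))   # 2.67
```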

Now do not confuse “all science is subjective” with nihilism or “anything goes”. There is general agreement within families of sciences about good practice, and even many “never” and “always” rules. For example, you should not just assemble an arbitrary group of people and make up numbers, with nary a calculation in sight, and call those scientific results that can then be used in quantitative assessments. (Well, actually they do do that in public health, but it is an indefensible practice.) But note that “general agreement” is another sociological phenomenon. When a field has evolved good standards, it produces reliable science. We learn, despite the fact that if you peel back the layers there is no bedrock foundation on which the process is built. It all comes down to human action, not some magical rules. The scientific failings of the “public health” field demonstrate what happens when this sociological process breaks down.

A few final observations: The reference in point 6, about statistical significance being an inappropriate consideration in epidemiology, requires a deeper explanation than I can cover today. This may strike many non-expert readers as surprising, given that many ostensible experts probably do not even understand the point. I have covered it before. Similarly point 11: If someone makes a claim about a particular study type being generically better (as opposed to “better for answering this particular question because [insert specifics]”), that mostly tells you they have a very rudimentary understanding of epidemiology. I devoted a post to point 10 just a few days ago, and I address point 7 quite frequently. The last sentence of point 4 is particularly important; anyone who describes an epidemiologic statistic (and even more so, a statistic about behavior or preferences) as if it were a universal constant clearly has little understanding of the science.

Optional reading

by Carl V Phillips

Readers, I will probably not be posting much this week (as evidenced) or next. For those who miss antiTHRlies, might I suggest reading the discussions in the comments from the previous three posts, which have several posts’ worth of material in them from me and others. Especially this thread.

And I will post this tweet (which is based on a research result I have discussed before) to further explain why I take the approach I do:

https://twitter.com/michaelshermer/status/690192979131252736

Sunday Science Lesson: What is “meta-analysis”? (and why was Glantz’s inherently junk?)

by Carl V Phillips

The recent controversy (see previous two posts), about Stanton Glantz’s “meta-analysis” that ostensibly showed — counter to actual reality — that e-cigarette users are less likely to quit smoking than other smokers, has left some readers wanting to better understand what this “meta-analysis” thing is, and why (as I noted in the first of the above two links) Glantz’s use of it was inherently junk science. Continue reading

Glantz responds to his (other) critics, helping make my point

by Carl V Phillips

Yesterday, I explained what was fundamentally wrong with Stanton Glantz’s new “meta-analysis” paper, beginning with parody and ending with a lament about the approach of his critics who are within public health. Glantz posted a rebuttal to the press release from those critics on his blog, which does a really nice job of helping me make some of my points. I look forward to his attempt to rebut my critique (hahaha — like he would dare), which would undoubtedly help me even more.

Glantz pretty well sums it up with:

The methods and interpretations in our paper follow standard statistical methods for analyzing and interpreting data.

Continue reading

The bright side of new Glantz “meta-analysis”: at least he left aerospace engineering

by Carl V Phillips

Stanton Glantz is at it again, publishing utter drivel. Sorry, that should be taxpayer-funded utter drivel. The journal version is here and his previous version on his blog here. I decided to rewrite the abstract, imagining that Glantz had stayed in the field he apparently trained in, aerospace/mechanical engineering. (For those who do not get the jokes, read on — I explain in the analysis. Clive Bates already explained much of this, but I am distilling it down to the most essential problems and trying to explain them so the reasons for them are apparent and this is not just a battle of assertions.) Continue reading

“Whatever happened to be measured” is not the same as exposure+outcome of interest

by Carl V Phillips

“Public health” research has countless failure modes: not understanding human motivations and other economic innumeracy; poor epidemiology methods that ignore advances since 1980; cherrypicking or just lying about evidence; childish faith in the journal review process; etc. But among the worst is drawing conclusions as if whatever happened to be measured in a study — often a rough proxy for one of many aspects of the phenomenon of interest — is a measure of the phenomenon of interest. (I know that sentence is a little dense; read on and it will become clear.) Continue reading

The key fact about ecig junk science: “public health” is a dishonest discipline

by Carl V Phillips

The latest kerfuffle around e-cigarette junk science comes from this toxicology study or, more precisely, this press release that is vaguely related to the study. Basically, a San Diego toxicology research group bathed cells in a very high dose of the liquids that come out of an e-cigarette, and eventually there were detectable changes in the cells. That is really all you need to know about the study’s actual results. (If you want more background, see Clive Bates’s post.) Contrived experiments like this provide nothing more than a bit of vague information that might someday lead to insight about the real world, though probably will not, and so might be worth exploring more using less ham-handed methods. That is all the information this type of research ever provides. No worldly conclusions are possible. It is vague basic science research that even at its best merely points the way for further research. Continue reading