Category Archives: Lies

Regular entries for this blog – bits of the catalog of lies.

My new paper: Understanding the basic economics of tobacco harm reduction

by Carl V Phillips

In case you missed it, my new IEA paper, Understanding the basic economics of tobacco harm reduction, is available here. You should go read it. The summaries do not do it justice. (Not really joking there — the summaries have picked up on one particular conclusion, but the value of the paper is laying out how to think about the whole issue.) I am posting this here primarily to create an opportunity for comments, since that is not available at the original.

Speaking of summaries, you can find this one at CityAM, which was kind enough to also run my op-ed that was based on the paper. (Needless to say, I did not choose the headline nor the link in the first paragraph — can you imagine me citing the RCP report as if they were the source of that information???)

An old letter to the editor about Glantz’s ad hominems

by Carl V Phillips

I am going through some of my old files of unpublished (or, more often, only obscurely published) material, and thought I would post some of it. While I suspect you will find this a poor substitute for my usual posts, I hope there is some interest (and there are implicit lessons for those who think any of this is new), and posting a few of these will keep this blog going for a few weeks.

This one, from 2009, was written as a letter to the editor (rejected by the journal — surprise!) by my team at the University of Alberta School of Public Health. It was about this rant, “Tobacco Industry Efforts to Undermine Policy-Relevant Research” by Stanton Glantz and one of his deluded minions, Anne Landman, published in the American Journal of Public Health (non-paywalled version if for some unfathomable reason you actually want to read it). The authorship of our letter was Catherine M Nissen, Karyn K Heavner, me, and Lisa Cockburn. 

The letter read:


Landman and Glantz’s paper in the January 2009 issue of AJPH is a litany of ad hominem attacks on those who have been critical of Glantz’s work, with no actual defense of that work. This paper seems to be based on the assumption that a researcher’s criticism should be dismissed if it is possible to identify funding that might have motivated the criticism. However, for this to be true it must be that: (1) there is such funding, (2) there is reason to believe the funding motivated the criticism, and (3) the criticism does not stand on its own merit. The authors devote a full 10 pages to (1), but largely ignore the key logical connection, (2). This is critical because if we step back and look at the motives of funders (rather than just using funding as an excuse for ignoring our opponents), we see that researchers tend to get funding from parties that are interested in their research, even if the researcher did not seek funding from that party (Marlow, 2008).

Most important, the authors completely ignore (3). Biased motives (whether related to funding or not) can certainly make us nervous that authors have cited references selectively, or in an epidemiology study have chopped away years of data to exaggerate an estimated association, or have otherwise hidden something. [Note: In case it is not obvious, these are subtle references to Glantz’s own methods.] But a transparent valid critique is obviously not impeached by claims of bias. The article’s only defense against the allegation that Glantz’s reporting “was uncritical, unsupportable and unbalanced” is to point to supposed “conflicts of interest” of the critics. If Glantz had an argument for why his estimates are superior to the many competing estimates or why the critiques were wrong, this would seem a convenient forum for this defense, but no such argument appears. Rather, throughout this paper it seems the reader is expected to assume that Glantz’s research is infallible, and that any critiques are unfounded. This is never the case with any research conducted, and surely the authors must be aware that any published work is open to criticism.

Indeed, presumably there are those who disagree with Glantz’s estimates who conform to his personal opinions about whom a researcher should be taking funding from, and yet we see no response to them. For example, even official statistics that accept the orthodoxy about second hand smoke include a wide range of estimates (e.g., the California Environmental Protection Agency (2005) estimated it causes 22,700-69,600 cardiac deaths per year), and much of the range implies Glantz’s estimates are wrong. But in a classic example of “a-cell epidemiology” [Note: This is a metaphoric reference to the 2×2 table of exposure status vs. disease status; the cell counting individuals with the exposure and the disease is usually labeled “a”.], Glantz has collected exposed cases to report, but tells us nothing of his critics who are not conveniently vulnerable to ad hominem attacks.
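
[Note: For readers unfamiliar with the notation, the standard 2×2 table is laid out as follows, so “a-cell epidemiology” means reporting only the exposed cases:]

```
              Disease   No disease
Exposed          a          b
Unexposed        c          d
```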

It is quite remarkable that given world history, and not least the recent years in the U.S., people seem willing to accept government as unbiased and its claims as infallible. Governments are often guilty of manipulating research (Kempner, 2008). A search of the Computer Retrieval of Information on Scientific Projects database on the National Institutes of Health’s website found that one of the aims of the NCI grant that funded Landman and Glantz’s research (specified in their acknowledgement statement) is to “Continue to describe and assess the tobacco industry’s evolving strategies to influence the conduct, interpretation, and dissemination of science and how the industry has used these strategies to oppose tobacco control policies.” Clearly this grant governs not only the topic but also the conclusions of the research, a priori concluding that the tobacco industry continues to manipulate research, and motivating the researcher to write papers that support this. Surely it is difficult to imagine a clearer conflict of interest than, “I took funding that required me to try to reach a particular conclusion.”

The comment “[t]hese efforts can influence the policymaking process by silencing voices critical of tobacco industry interests and discouraging other scientists from doing research that may expose them to industry attacks” is clearly ironic. It seems to describe exactly what the authors are attempting to do to Glantz’s critics: discredit and silence them. That is to say nothing of Glantz’s concerted campaign to destroy the career of one researcher whose major study produced a result Glantz did not like (Enstrom, 2007; Phillips, 2008). If Glantz were really interested in improving science and public health, rather than defending what he considers to be his personal turf, he would spend his time explaining why his numbers are better. Instead, he spends his time outlining (and then not even responding to) the history of critiques of his work, offering only his personal opinions about the affiliations of his critics in his defense.


1. Landman A, Glantz SA. Tobacco Industry Efforts to Undermine Policy-Relevant Research. American Journal of Public Health. 2009 Jan;99(1):1-14.

2. Marlow ML. Honestly, Who Else Would Fund Such Research? Reflections of a Non-Smoking Scholar. Econ Journal Watch. 2008 May;5(2):240-268.

3. California Environmental Protection Agency. Identification of Environmental Tobacco Smoke as a Toxic Air Contaminant. Executive Summary. June 2005.

4. Kempner J. The Chilling Effect: How Do Researchers React to Controversy? PLoS Medicine. 2008;5(11):e222.

5. Enstrom JE. Defending legitimate epidemiologic research: combating Lysenko pseudoscience. Epidemiologic Perspectives & Innovations. 2007;4:11.

6. Phillips CV. Commentary: Lack of scientific influences on epidemiology. International Journal of Epidemiology. 2008 Feb;37(1):59-64; discussion 65-68.

7. Libin K. Whither the campus radical? Academic Freedom. National Post. October 1, 2007.


Our conflict of interest statement submitted with this was — as has long been my practice — an actual recounting of our COIs, unlike anything Glantz or anyone in tobacco control would ever write. It read:

The authors have experienced a history of attacks by those, like Glantz, who wish to silence heterodox voices in the area of tobacco research; our attackers have included people inside the academy (particularly the administration of the University of Alberta School of Public Health (National Post, 2007)), though not Glantz or his immediate colleagues as far as we know. The authors are advocates of enlightened policies toward tobacco and nicotine use, and of improving the conduct of epidemiology, which place us in political opposition to Glantz and his colleagues. The authors conduct research on tobacco harm reduction and receive support in the form of a grant to the University of Alberta from U.S. Smokeless Tobacco Company; our research would not be possible if Glantz et al. succeeded in their efforts to intimidate researchers and universities into enforcing their monopoly on funding. Unlike the grant that supported Glantz’s research, our grant places no restrictions on the use of the funds, and certainly does not pre-ordain our conclusions. The grantor is unaware of this letter, and thus had no input or influence on it. Dr. Phillips has consulted for U.S. Smokeless Tobacco Company in the context of product liability litigation and is a member of British American Tobacco’s External Scientific Panel.

Serious ethical concerns about public health research conduct; the case of vape convention air quality measurement

by Carl V Phillips

A recent paper in Tobacco Control (official version; unpaywalled version), “Electronic cigarette use and indoor air quality in a natural setting”, by Eric K Soule, Sarah F Maloney, Tory R Spindle, Alyssa K Rudy, Marzena M Hiler, and Caroline O Cobb, of Thomas Eissenberg’s FDA-funded shop at Virginia Commonwealth University, reports on the researchers’ surreptitious observations at a vape convention. The research methods employed are extremely troubling to me and many others. Continue reading

New FDA-funded @SDSU research establishes that public health researchers are remarkably dim

by Carl V Phillips

I was not going to post today, but there is so much hilarious chatter about this new press release from San Diego State University and their FDA-funded “research” on e-cigarettes that I could not resist. This simplistic research about web searches related to e-cigarettes deserves a paragraph-by-paragraph dissection.

Oh, and of course there is a new journal paper that goes with it. But, seriously, who cares? Academic “public health” practice has descended to the point that a journal paper is just an excuse to write an even more misleading press release. It is time to stop pretending otherwise and just peer-review the press release. I am sure if I dissected the paper itself I could identify numerous problems that are not evident from just the press release — that seems to always be the case — but, again, who cares? It is not as if anyone in public health pays any attention to the quality of the science. When the paper is cited, those citing it will effectively just be citing the press release.

It is worth starting with the last bit, to see who shares the “credit” here:

The study was funded by 5R01CA169189-02, RCA173299A, and T32CA009492 from the National Cancer Institute and U.S. Food and Drug Administration Center for Tobacco Products. The content is solely the responsibility of the authors and does not represent the official views of the funders. The funders had no role in the design, conduct, or interpretation of the study nor the preparation, review, or approval of the manuscript. Additional collaborators on this study included: Benjamin Althouse of the Santa Fe Institute; Jon-Patrick Allem of the University of Southern California; Eric C. Leas of the University of California, San Diego; and Mark Dredze of Johns Hopkins University.

About San Diego State University: San Diego State University is a major public research institution that provides transformative experiences, both inside and outside of the classroom…blah, blah, blah….

Ok, on to the silliness, which begins with:

The Oxford Dictionaries selected “vape”–as in, to smoke from an electronic cigarette–as word of the year in 2014. It turns out that Internet users’ search behavior tells a similar story.

Between 2009 and 2015, the number of people in the United States seeking information online about vaping rose dramatically, according to a recent study co-led by San Diego State University Internet health expert John W. Ayers and University of North Carolina tobacco control expert Rebecca S. Williams as a part of the Internet Tobacco Vendors Study.

Yes, that really is the start of the press release. Normally cutesy hooks like this are not worth thinking about, but this is ostensibly a scientific press release, so let’s think like scientists for a moment. What could “tells a similar story” possibly mean? What story? The reader is presented with no story that offers any similarity to anything, let alone to search histories.

If the reader has the knowledge to fill in what the first sentence represents, then this becomes even sillier. The “word of the year” is not a measure of prevalence, or the first derivative of prevalence, of a word’s use. Those play some role, obviously, but the choice is ultimately a decision about what word seems to be a good representative of the changing zeitgeist, as evidenced by the many nominees and winners that are not particularly common. You would think that most scholars already know this, and those who did not know would invest the three minutes of research it would take to discover it before invoking the inaccurate analogy. But we are talking about public health activists, not scholars.

Oh, and of course, “to smoke from an electronic cigarette” would be a fairly stupid thing to do. But don’t blame the linguistically careful folks at Oxford Dictionaries for that error; their definition makes clear they know that there is no smoke involved. That error is the fault of the crack team of FDA-funded researchers.

Moving on with the study results, we learn the shocking news that searches for a product category increase dramatically when it becomes fairly popular, as compared to when it was barely known. In other news, searches for “greek yogurt”, “is Pluto a planet”, and “Bernie Sanders” increased dramatically between 2009 and 2015.

E-cigarettes and other hand-held vaporizers began appearing on American shelves in the mid-2000s. Since then, they’ve quickly risen in popularity while regulators have been slow to adapt smoking legislation to account for these devices.

It is not clear which “other hand-held vaporizers” are not e-cigarettes, but never mind that. Also, we will just gloss over the odd use of either “mid-” or “shelves” (it should either say “began appearing via internet sales” or “late-2000s”; but, hey, it is just too much to ask to get simple background facts right). Focus instead on the last bit, in which these people who are supposedly issuing a press release about research lead off with an unsupported normative claim (that regulators should adapt smoking legislation to “account for” [sic] e-cigarettes). Notice that they slip this in as if it goes without saying, when it is actually a far more significant claim than anything in the research results they actually report. That, of course, is SOP in “public health”, where research is done primarily as an excuse to express unsupported political opinions.

“Big Tobacco has largely taken over the e-cigarette industry. Alongside unchecked marketing and advertising, e-cigarettes have exploded online,” Ayers said.

So in public health land, “largely taken over” means having less than half the market share (which is itself divided amongst several competing companies), and “unchecked” means “subject to the countless restrictions on all marketing communication, as well as specific restrictions such as not being able to tell the truth about the health risks.” Um, yeah. So that would mean that Apple has largely taken over the smartphone industry, and is free to lie to consumers about their products with impunity.

Internet users’ search history bears this out.

Wait, what? (Note that I am not skipping any text.) How can any data about search history “bear out” any of the claims in the previous sentence? The closest it could come to any of them would be that an increase in searches would probably be associated with that “exploded online” thing, though the latter seems to be a claim about the supply of information while the searches reflect demand for information.

Ayers, Williams, and a team of colleagues from across the country examined search history from Google Trends, which includes statistics on what specific words people searched for, the search term’s popularity relative to all other concurrent searches in a specified time, date and geographic location. From this data, the researchers can find patterns that point to Internet searchers’ apparent preferences and attitudes.

Ok, that is a nice high-school science project or term-paper-level exercise. One can learn something from that. Let’s see if they did….
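
(For reference, the whole exercise amounts to something like the following minimal sketch, using the unofficial pytrends wrapper for Google Trends; the keyword list and date range here are my guesses, not their reported methodology.)

```python
# Minimal sketch of a Google Trends query, using the unofficial
# pytrends wrapper (pip install pytrends). The keywords and timeframe
# are illustrative guesses, not the authors' actual search terms.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(
    kw_list=["vaping", "e-cigarette"],  # hypothetical terms
    timeframe="2009-01-01 2015-12-31",
    geo="US",
)

# Relative search popularity over time (0-100 scale, normalized to the
# peak within the query -- not absolute search counts).
trend = pytrends.interest_over_time()
print(trend.head())

# Relative popularity by U.S. state.
by_state = pytrends.interest_by_region(resolution="REGION")
print(by_state.sort_values("vaping", ascending=False).head())
```

Note that Trends returns only relative popularity on a 0-100 scale, not absolute search counts, which matters for the numerical claims we will get to below.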

When they looked at searches related to e-cigarettes starting in 2009, they found a sharply rising trend through 2015 with no end in sight.

I am not sure if that is LOL funny for everyone reading it, or if it is just me. I actually had to pause for a couple of minutes before resuming. As already noted, obviously there is going to be a sharply rising trend from 2009. But it is the “no end in sight” that really got me, at several levels. Why would anyone in their right mind even think to mention that? Internet use is increasing and e-cigarettes are increasing in popularity; what possible end are they even talking about?

Moreover, at a deeper scientific level, we are talking about a social phenomenon that could end, and quite abruptly, whatever the historical trend is. This is not like claiming there is no end in sight for the warming of the planet, which could be based on what we know about atmospheric chemistry, which is not affected by social proclivities. Social trends, by contrast, can change abruptly. I am sure that the data for searches of “Bernie Sanders” follows a very similar trend to searches for e-cigarettes, but any real social scientist can foresee an end to that trend.

For example, in 2014 there were about 8.5 million e-cigarette-related Google searches. For 2015, their model forecasts an increase in these searchers of about 62-percent. Looking at geographic data, they found that e-cigarette searches have diffused across the nation, suggesting that e-cigarettes have become a widespread cultural phenomenon in every U.S. state. Over the same time period, searches for e-cigarettes far outpaced other “smoking alternatives” such as snus (smokeless tobacco) or nicotine gum or patches.

I am not sure what that first sentence means, but it obviously does not mean what it says. I would guess that readers of this blog alone conducted most of 8.5 million “e-cigarette-related” Google searches in 2014. Obviously researchers can choose to study whatever specific phenomena they want to, narrowing what they are counting up, but they need to say what they are doing. The fact that someone would put out an obviously incorrect number like this and that the press would dutifully report it without thinking it through speaks volumes about what a joke public health discourse has become.

The results about states are equally useful information. I mean, who would have guessed that the primitive tribes of Tennessee and the transcendent life forms in Oregon would have similar internet search behavior to the rest of the country? It is not surprising that the searches exceeded those for other low-risk alternatives (though I have no idea what their scare quotes are supposed to mean — presumably it is innuendo that alternatives to smoking are not really alternatives to smoking). Though I have to wonder if their methodology missed most of the searches for smokeless tobacco, which probably used established brand names rather than the word “snus”.

The researchers published their findings today in the American Journal of Preventive Medicine. (Note: this URL will be active after the embargo lifts)

Oh, look, I don’t have to wonder. I could go read the paper and learn their exact methodology. Just kidding. I have little doubt that I would still not know, given the poor quality of methods reporting in public health research. (Also I would have to go search for it, since the aforementioned URL does not actually exist on that page — the smallest of the many errors that appear.) Since there is really no chance that anything useful will come of that effort, I am skipping it. Anyone who actually bothers to read the paper can use the comments to backfill anything good I might have missed.

What most concerns the researchers, though, is that when people search for e-cigarette information, they’re using search terms like: “best e-cig,” “buy vapes” or “shop vaping.”

Why should anyone care about what most concerns these researchers? They should not. Being able to run simple statistics on Google searches implies absolutely no moral authority to opine about, let alone ability to analyze, what is better for society. (Incidentally, what would most concern me would be if the phrases “buy vapes” or “shop vaping” were more common than grammatically sensible phrases. I assume they actually were not, and that they were intentionally cherrypicked to try to ridicule consumers.)

In any case, in what world would it be at all concerning that most of the searches for a consumer product would be a combination of seeking review information (as “best e-cig” is presumably intended to do) and purchase options? It would certainly not surprise anyone.

“One of the most surprising findings of this study was that searches for where to buy e-cigarettes outpaced searches about health concerns or smoking cessation,” Williams said.

I stand corrected. Let me amend that to, “It certainly would not surprise anyone with a clue about how the world works.”

“Despite what the media and e-cigarette industry might have you believe, there is little research evidence to support the notion that e-cigarettes are safe or an effective tool to help smokers quit. Given that, we think it’s revealing that there were fewer searches about safety and cessation topics than about shopping.” In fact, she said, searches for e-cigarette safety concerns represented less than 1 percent of e-cigarette searches, and this number has declined over the past two years.

Um… what?

Set aside all the usual lies that are embedded in that, about what the evidence shows, the “safe” wordplay, the misrepresentation of the predominant message in the media, and the misrepresentation of what is permitted in marketing. Just skip to the specific claim here, that she seems to think that those phenomena (even if they actually were true) would cause people to do more searches for background information than for product reviews and purchases. Seriously?

Even for a product where most of the background information you would find from a random search was not utter bullshit, as it is in this case, consumers are still going to mostly search regarding purchase plans. People who are seriously interested in that other information develop networks of trusted sources; they would get nowhere doing random searches. Anyone who is shopping for e-cigarettes has already acquired the information that e-cigarettes are worth shopping for, presumably knowing that they are a low-risk alternative to smoking. Why, exactly, would they want to search for that?

Frankly, I would be extremely disturbed to learn that many short-phrase searches about e-cigarettes were seeking scientific information. That would truly be a tragic commentary on people’s understanding of how to learn anything about controversial issues via the internet.

If these “researchers” actually had any expertise in the research they were conducting — which is to say, about consumer online search behavior, not about tobacco politics — we might have gotten some useful information. For example, how do these statistics (which are inevitably very weak, depending on their choices of phrases and how they were coded) compare to those for other products? What is the quality of information that someone would get were she to pursue such a blind search for information? This speaks to a common problem in “public health”, where research follows political interests rather than scientific skills. There is absolutely no reason why someone doing this particular research would need to know anything about e-cigarettes, other than some basic vocabulary. They should, however, know something about real-world consumer behavior.

A linguistic trend also emerged from the study. The term “vaping” has quickly overtaken “e-cigarettes” as the preferred nomenclature in the United States. That’s important for health officials and researchers to recognize, the team noted. Surveillance of smoking trends is done primarily through surveys and questionnaires, and knowing which terms people use can affect the accuracy of this data.

Wow, that is almost useful information. Of course most of us already knew that. We also know that “vaping” is an act whereas “e-cigarettes” are a product, and so they are not really commensurate, which is a rather important distinction for doing those surveys and questionnaires. (“Hey, guys, according to this new study, we should stop asking ‘have you used an e-cigarette’ and start asking ‘have you used a vaping’.”) Those of us who know how to do surveys, of course, already make a point to define what we are talking about, including offering the multiple popular terms if that is an issue.

Also, one of the major weapons anti-smoking advocacy groups have is counter-advertising. In the Internet age, advertisers look for specific keywords to target their advertisements. Knowing that more people use the term “vaping” than “e-cig” helps them be more targeted and effective, Ayers said.

So “counter-advertising” in the interests of “anti-smoking” should make sure to intercept all those searches for “vaping” in order to make sure the anti-vaping propaganda reaches its target audience of people who want to avoid smoking. Yup, they pretty much earned their FDA money with that one.

“Labels do matter,” Ayers said. “When you call it ‘vaping,’ you’re using a brand new word that doesn’t have the same historical baggage as ‘smoking’ or ‘cigarette.’ They’ve relabeled it. Health campaigns need to recognize this so they can keep up.”

“They’ve relabeled it”? Who is this “they”? Oh yeah, I remember, it is the Oxford Dictionaries people. The bastards.

Of course what “health campaigns” really need to do to keep up is to learn something about health. They are not going to achieve that by doing random internet searches, by the way. Nor by reading press releases or papers by the tobacco control industry’s pet academics.

Sunday Science Lesson: 13 common errors in epidemiology writing

by Carl V Phillips and Igor Burstyn

[Note: The following is a guide to writing epidemiology papers that Igor distributed to his students last week. He started drafting it and brought me in as a coauthor. While this was written as a guide for how to do epidemiology – the reporting part of doing it – it serves equally well as a guide to identifying characteristics of dishonest or incompetent epidemiology writing, which will be of more interest to most readers. Below that are my elaborations on some of the points; Igor reviewed those to make sure there was nothing he clearly disagreed with, but that part is mine alone. –CVP]

Errors commonly committed by epidemiologists and public health professionals that you must avoid in your own writing and thinking in order to practice evidence-based public health: The Unlucky 13

By Igor Burstyn and Carl V Phillips

1. A research study, unless it is a policy analysis, never suggests a policy or intervention. Policy recommendations never follow directly from epidemiologic evidence, and especially not from a single study result. If someone wants to publish your policy op-ed, go for it, but do not treat a research report as an opportunity to air your personal politics.

2. “More research like this is needed” is never a valid conclusion of a single paper that does not perform any analysis to predict the practical value of future research. More research will always tell us something, but this is always true and so not worth saying. “Different research that specifically does X would help answer a key unanswered question highlighted by the present research” is potentially useful, but requires some analysis and should be invoked only if there is nothing more interesting or useful to say about the research itself.

3. Conclusory statements (whether in the conclusions section or elsewhere in the paper) must be about the results and their discussion. If concluding statements could have been written before the research was conducted, they are inappropriate. If concluding statements are based on something that is not in the analysis (e.g., normative opinions about what a better world would look like), they are inappropriate. If concluding statements from a research report include the word “should”, they are probably inappropriate.

4. Citing peer-reviewed articles as if they were facts is wrong: all peer-reviewed papers contain errors, and uncritical acceptance of what others concluded is equivalent to trusting tabloids. Even the best single study cannot create a fact. Write “A study of X by Smith et al. found that Y doubled the risk of Z”, not “Y doubles the risk of Z”. And make sure that “X” includes a statement of the exposure, outcome, and population – there are no universal constants in epidemiology.

5. Never cite any claim from a paper’s introduction, or from its conclusory statements that violate point 3. If you want to just assert the claim yourself, that might be valid, but pretending that someone else’s mere assertion makes it more valid is disingenuous.

6. Avoid using adjectives unless they are clearly defined. A result is never generically “important” (it might be important for answering some particular question). Saying “major risk factor” has no clear meaning. In particular, avoid using the word “significant” either in reference to a hypothesis test (which is inappropriate epidemiologic methodology) or in its common-language sense (because that can be mistaken for being about a hypothesis test).

7. Avoid using colloquial terms that have semi-scientific meanings, especially inflammatory ones (e.g., toxin, addiction, safe, precautionary principle), unless you give them a clear scientific definition.

8. Do not hide specific concerns behind generalities. If a statement is not specific to the problem at hand, do not make it, or make it problem-specific. For example, it is meaningless to write in an epidemiologic paper that “limitations include confounding and measurement error”, without elaborating on specifics. You should instead explain “confounding may have arisen because of [insert evidence]”, etc.

9. Introductions should provide the information that the audience for your paper needs to know to make sense of your research, and nothing more. If a statement is common knowledge for anyone familiar with your subject (e.g., “smoking is bad for you”), leave it out. If a statement is about the politics surrounding an issue, and you are writing an epidemiology report, leave it out. If there is previous research that is useful for understanding yours, put it in (and not just a few cherrypicked examples – cover or summarize all of it).

10. Your title, abstract, and all other statements should refer to what you actually studied; if you were trying to proxy for something else, explain that, but do not imply that you actually observed that. For example, if you studied the effects of eating selenium-rich foods, refer to the “effects of eating selenium-rich foods”, not the “effects of selenium”.

11. Never judge a study based on its design per se but instead think about whether the design adopted is appropriate for answering the question of interest. There is no “hierarchy of study types” in epidemiology (e.g., there are cases where an ecological study is superior to a randomized controlled trial).

12. Unless you are analyzing the sociology of science, never discuss who did a study (e.g., what conflicts of interest you perceive or read about) because this leads into a trap of identity politics that has no place in science. Focus on the quality of a particular piece of work, not the authors’ affiliation/politics/funding.

13. Even as science seeks truth, it is still merely a human activity striving to reduce the range of plausible beliefs (read more on “subjective probability”, if interested); realize this about others’ work and do not try to imply otherwise about your own work. It is acceptable to present an original thought or analysis that requires no references to prior work. Indeed, if you are saying something because it was an original thought, searching for where someone already said it just to have a citation for it is misleading and deprives you of well-deserved credit. Likewise, write in the active voice (e.g., “I/We/Kim et al. conducted a survey of …” rather than “A survey was conducted …”) because the work is that of identifiable, fallible humans, not the product of anonymous and infallible Platonic science.


Additional observations by CVP on some of the points:

Point 3 (including point 1, which is a particularly common and harmful special case of 3), along with point 9, represents the key problem with public health “research” in terms of the harm it does in society. Expert readers who are trying to learn something from an epidemiology study simply do not read the conclusions or introductions of papers. They just assume, safely, that these will have negative scientific value. However, inexpert readers often read only the abstract and conclusions, perhaps along with the other fuzzy wordy bits (math is hard!). This would not be a problem if those bits were legitimate analyses of the background (for the introduction) and of the research (for the discussion and conclusions). But that is rarely the case. Vanishingly rarely.

Instead, the introduction is usually a high-school-level report on the general topic, fraught with broadside-level simplifications and undefended political pronouncements, and the conclusions usually are not related to the actual study. As specifically noted in the guide, concluding statements to an epidemiology research report should never include any policy suggestions because there is no analysis of policy. There is no assessment of whether a particular policy or any conceivable policy would further some goal, let alone of all the impacts of the intervention. More important, there is no assessment of the ethics or political philosophy of the particular goal, so there is no basis for normative statements. Having the technical skill to collect and analyze data conveys no special authority to assert how the world should be. None. NONE!!! (I just cannot say that emphatically enough.)

Indeed, the entire FDA e-cigarette “deeming” regulation (as drafted, and presumably also the secret final version) can be seen as a perfect example of these problems. It basically says: “Here are some [dubious] observations about characteristics of the world. Therefore we should implement the proposed policy.” As with a typical research paper, they do not fill in any of the steps in between. Why do the supposed characteristics of the world warrant any intervention? Why should we believe the policy will change those characteristics? What other effects will the particular intervention have? How do the supposed benefits compare to the costs? Literally none of these are considered. How can regulators get away with that? Because they are acting in a realm where policy recommendations are always made without considering any of that.

One technical note about point 3, the last sentence: That is phrased equivocally because a “should” statement can be about the actual research. E.g., “These results show that anyone who wishes to make claims about the effectiveness of a smoking cessation method should try to account for confounding and try to assess the inevitable residual confounding.” (Note that I do not actually know of a paper that includes that conclusion, which is really too bad, but it could be a conclusion of our second-order preferences paper.) But it is the rare “should” statement that is actually about the implications of the research, rather than some unexamined political preference.

Those points then extend into point 5 (point 4 is more of a corollary to what comes below). Because there is basically no honesty control in public health, nor quality control in public health journals, unsupported conclusions are frequently cited and thereby turned into “facts” by repetition. It is never safe to assume that any general observation or conclusory claim that is cited to a public health paper is actually supported by the substance of that paper. In my experience it is true only about 1/4 of the time.

Point 13, about the subjective nature of science, is subtle and only touches the surface of deep philosophical, historical, and sociological thinking. But it may be among the most important for protecting non-experts from the harms caused by dishonest (or just plain clueless) researchers. Scientific inquiry is a series of (human) decisions intermixed with a series of (human) assessments. For many mechanical actions – reading a thermometer or doing a particular regression – we have some very tightly drawn methods that (almost) exactly define an action and constrain subjectivity. But for many others, it is all about choices. Which variables were included in the regression, and in what functional form? That is a human choice. (Ideally, if there are multiple sensible candidates for those, they should all be tried, and all the results reported, as in the sketch below. Failure to do that is a candidate for the intermediate-level guide to common errors, rather than this basic one. In any case, which of the many results to highlight is still a choice.) Any author who tries to imply that their choices were other than choices — or, worse, that there were no other choices available — is trying to trick the reader.
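
To make that concrete, here is a minimal sketch, with entirely made-up data and hypothetical variable names, of reporting the exposure estimate across several sensible specifications rather than presenting one chosen model as if it were the only option:

```python
# Minimal sketch: fit several sensible model specifications and report
# all of them, rather than presenting one choice as if it were forced.
# The data and variable names here are entirely hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "exposed": rng.integers(0, 2, n),
    "age": rng.normal(50, 10, n),
    "smoker": rng.integers(0, 2, n),
})
true_logit = -2 + 0.4 * df.exposed + 0.03 * (df.age - 50) + 0.8 * df.smoker
df["disease"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

specs = [
    "disease ~ exposed",                             # crude
    "disease ~ exposed + age",                       # age-adjusted
    "disease ~ exposed + age + smoker",              # fully adjusted
    "disease ~ exposed + age + I(age**2) + smoker",  # different functional form
]
for spec in specs:
    fit = smf.logit(spec, data=df).fit(disp=0)
    or_est = np.exp(fit.params["exposed"])
    lo, hi = np.exp(fit.conf_int().loc["exposed"])
    print(f"{spec:45s} OR={or_est:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Each specification is defensible; the honest move is showing the reader all of them, not quietly picking the one with the preferred answer.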

Consider the discussion of meta-analysis from a week ago. The authors of the original junk meta-analysis, in their various writings, aggressively tried to trick readers in this manner. But many of the critics did no better, trying to claim that there are some (non-existent) bright-line rules of conduct that the original authors violated. The big-picture problem I noted is that there is no scientific value in calculating meta-analysis statistics in this situation, and rather obvious costs, and that even trying to do so was absurd. But set that aside and consider the details: If you are going to do a meta-analysis, the choice of which studies to include is subjective. Should you exclude studies that obviously have huge immortal person-time selection bias problems? Honest researchers would generally agree (note: a human process) that you should. But what about studies that apparently have a little bit of such bias? Similarly, supposedly bright-line rules require subjective interpretation: Typical lists of rules for meta-analysis say that we should not include any study that does not have a comparison group. Ok, fine, but what if the study subjects are very representative of a well-defined population, and the authors use knowledge about that population as the comparison? What if the subjects are less representative of that population, but the researchers still make the comparison? What if the researchers ostensibly included a comparison population in the study, but the two populations are too different to legitimately compare?
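
To see why the inclusion choice matters, here is a minimal sketch of standard fixed-effect inverse-variance pooling, with entirely hypothetical study results; note how much the pooled estimate moves depending on which studies the (human) analyst decides qualify:

```python
# Minimal sketch of fixed-effect inverse-variance meta-analysis pooling.
# The study results below are entirely hypothetical, chosen to show how
# the subjective inclusion decision moves the pooled estimate.
import numpy as np

def pool(log_effects, std_errs):
    """Inverse-variance weighted pooled estimate with a 95% CI."""
    log_effects = np.asarray(log_effects)
    w = 1.0 / np.asarray(std_errs) ** 2
    est = np.sum(w * log_effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return np.exp([est, est - 1.96 * se, est + 1.96 * se])

# Four hypothetical studies: log odds ratios and their standard errors.
# Suppose the fourth arguably has an immortal person-time bias problem.
log_ors = np.log([0.7, 0.9, 1.1, 2.2])
ses = np.array([0.25, 0.20, 0.30, 0.15])

print("All four studies: OR=%.2f (%.2f-%.2f)" % tuple(pool(log_ors, ses)))
print("Dropping study 4: OR=%.2f (%.2f-%.2f)" % tuple(pool(log_ors[:3], ses[:3])))
```

Neither answer is “the” answer; each reflects a judgment call, which is exactly the point.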

Similar confusion — naïve beliefs that standard approaches are not subjective choices, or that merely stating a standard creates a non-subjective bright line — can be found everywhere you look in and around epidemiology. If you have ever seen someone invoke “causal criteria”, it is undoubtedly an example of this. Those lists are not criteria, but merely suggestions for points to (subjectively) consider when assessing causation; they contain no rules. Use of logistic regression is the default method for generating statistics in epidemiology, but the assumptions it embeds are not necessarily reasonable and its use is a choice (perhaps driven by the researchers not knowing how to do anything else, but that is a different problem).

Now do not confuse “all science is subjective” with nihilism or “anything goes”. There is general agreement within families of sciences about good practice, and even many “never” and “always” rules. For example, you should not just assemble an arbitrary group of people and make up numbers, with nary a calculation in sight, and call those scientific results that can then be used in quantitative assessments. (Well, actually they do do that in public health, but it is an indefensible practice.) But note that “general agreement” is another sociological phenomenon. When a field has evolved good standards, it produces reliable science. We learn, despite the fact that if you peel back the layers there is no bedrock foundation on which the process is built. It all comes down to human action, not some magical rules. The scientific failings of the “public health” field demonstrate what happens when this sociological process breaks down.

A few final observations: The reference in point 6, about statistical significance being an inappropriate consideration in epidemiology, requires a deeper explanation than I can cover today. This may strike many non-expert readers as surprising, given that many ostensible experts probably do not even understand the point. I have covered it before. Similarly point 11: If someone makes a claim about a particular study type being generically better (as opposed to “better for answering this particular question because [insert specifics]”), that mostly tells you they have a very rudimentary understanding of epidemiology. I devoted a post to point 10 just a few days ago, and I address point 7 quite frequently. The last sentence of point 4 is particularly important; anyone who describes an epidemiologic statistic (and even more so, a statistic about behavior or preferences) as if it were a universal constant clearly has little understanding of the science.

Glantz responds to his (other) critics, helping make my point

by Carl V Phillips

Yesterday, I explained what was fundamentally wrong with Stanton Glantz’s new “meta-analysis” paper, beginning with parody and ending with a lament about the approach of his critics who are within public health. Glantz posted a rebuttal to the press release from those critics on his blog, which does a really nice job of helping me make some of my points. I look forward to his attempt to rebut my critique (hahaha — like he would dare), which would undoubtedly help me even more.

Glantz pretty well sums it up with:

The methods and interpretations in our paper follow standard statistical methods for analyzing and interpreting data.

Continue reading

The bright side of new Glantz “meta-analysis”: at least he left aerospace engineering

by Carl V Phillips

Stanton Glantz is at it again, publishing utter drivel. Sorry, that should be taxpayer-funded utter drivel. The journal version is here and his previous version on his blog here. I decided to rewrite the abstract, imagining that Glantz had stayed in the field he apparently trained in, aerospace/mechanical engineering. (For those who do not get the jokes, read on — I explain in the analysis. Clive Bates already explained much of this, but I am distilling it down to the most essential problems and trying to explain them so the reasons for them are apparent and this is not just a battle of assertions.) Continue reading

The key fact about ecig junk science: “public health” is a dishonest discipline

by Carl V Phillips

The latest kerfuffle around e-cigarette junk science comes from this toxicology study or, more precisely, this press release that is vaguely related to the study. Basically, a San Diego toxicology research group bathed cells in a very high dose of the liquids that come out of an e-cigarette, and eventually there were detectable changes in the cells. That is really all you need to know about the study’s actual results. (If you want more background, see Clive Bates’s post.) Contrived experiments like this provide nothing more than a bit of vague information that might someday lead to insight about the real world, though probably will not, and so might be worth exploring more using less ham-handed methods. That is all the information this type of research ever provides. No worldly conclusions are possible. It is vague basic science research that even at its best merely points the way for further research. Continue reading