[Update: For those who want more details of the criticism of the Dutra-Glantz paper, or are only interested in that and not the broader question of how to combat lies, I have posted a PubMed Commons comment here.]
Further on the critically important theme of my previous post, we are perhaps already starting to see a positive trend. The New York Times went as far as to identify one of Trump’s lies with the word “lie” in its top headline today. They did not go quite so far as to label him a “liar”, understandably, but that is implicit. Readers of this blog will recall my arguments for the importance of calling out liars as such. Piecemeal responses to each individual lie are a hopeless tactic. For one thing, you end up with this problem:
So it may be that legitimate news sources are adjusting remarkably rapidly. Perhaps they will extend that skepticism to their reporting on more technical issues, like health claims.
Hahahaha.
Well, I can dream.
For reasons similar to why that is a dream, I wonder whether the NYT and other mainstream press, or even the self-appointed “fact checker” types, will identify as “lies” utterly indefensible claims that are less immediately obvious. It is pretty easy to bury such a lie under a patina of science or analysis, so that declaring it clearly false demands rather more courage.
Since it is usually possible to dress up a lie with some such patina, bald lies that are just asserted seem to be a different tactic. They are not meant to be believed. They are a test (which, happily, the press is starting to come out on the right side of):
It is more complicated when those who really understand the claim can see it is clearly false or unsupportable, but the average news writer or editor cannot. That complication is why I have little faith this newfound skepticism will spill over into science reporting.
Consider the latest battle over e-cigarette junk science: this paper from Dutra and Glantz about the effect of e-cigarette availability on youth smoking, and these responses by Siegel and Snowdon. I can save you some reading, because the story is as simple as this: Dutra and Glantz took NYTS data for smoking rates among American minors. They fit a linear trend to the decline between 2004 and 2009 (which they declared the start of the e-cigarette era), and then observed that the trend did not change much for 2009 to 2014. From this they concluded e-cigarettes were not contributing to there being less smoking. They then told the press that this shows that e-cigarettes are causing more smoking.
Where to start? Actually, there is really only one place to start, one underlying problem that really matters. But that is the part that requires some scientific thinking to notice.
Naturally the critics called out the internal contradiction, where the supposed lack of decline in smoking associated with e-cigarettes was portrayed as e-cigarettes actually causing more smoking. As noted by Snowdon, however, not all of the press balked at even that obvious lie (this is arguably an example of the above point by Kasparov). A slightly more sophisticated critique is the observation that if we use a cutpoint of 2011 in the Glantz model (i.e., calculate the linear trend from 2004 to 2011, rather than 2009, and then compare to the linear trend for 2011 to 2014), then there is a steeper decline in the latter part compared to the former. One could argue, fairly persuasively, that 2011 is a better choice for testing for an inflection, since it is when some measurable usage of e-cigarettes began in this population.
But the real problem is not that Glantz chose the wrong cutpoint. The real problem is one that is common throughout epidemiology research, not just anti-tobacco propaganda: reporting a result that is entirely dependent on an assumed model without ever examining the validity of the model. Indeed, the trick is to never acknowledge there even is an assumption-laden model — e.g., by trying to justify the assumptions — and hope the reader does not notice. If someone argues that 2011 is a better cutpoint than 2009, and that therefore the conclusion should be there was an increase in the decline in smoking associated with e-cigarette use, they have already lost the war. They have conceded that the basic model is legitimate and the only reason to doubt the result is a he-said-she-said quibble over a detail. Most people who are not already predisposed to disagree with Glantz will tend to accept the original version of that detail — after all, that is what was officially published.
The proper critique based on the observation about switching to 2011 is not “my input is better than his” but “this model is completely unstable, producing fundamentally different results with alternative inputs which seem to be as reasonable as the original inputs, and thus any result from it is meaningless.” Notice the difference between this, which says the model has no information value at all, and the claim that a tweak to the model produces a result that the critic prefers. The latter actually validates the original claim in the eyes of many readers.
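The instability is easy to demonstrate in a few lines of Python using numpy’s `polyfit`. To be clear, the smoking rates below are made-up placeholder numbers, not the actual NYTS figures; the sketch only illustrates the mechanics of the before/after slope comparison and how the story it tells depends entirely on the chosen cutpoint.

```python
# Sketch of the cutpoint-sensitivity critique. The rates below are
# hypothetical placeholders (not the actual NYTS figures), constructed
# only to show how the choice of cutpoint drives the result.
import numpy as np

years = np.arange(2004, 2015)
rates = np.array([12.0, 11.2, 10.4, 9.6, 8.8, 8.0,
                  7.8, 7.6, 6.2, 4.8, 3.4])

def slopes_around(cutpoint):
    """Fit separate linear trends up to and from a cutpoint year."""
    before = years <= cutpoint
    after = years >= cutpoint
    slope_before = np.polyfit(years[before], rates[before], 1)[0]
    slope_after = np.polyfit(years[after], rates[after], 1)[0]
    return slope_before, slope_after

for cut in (2009, 2011):
    b, a = slopes_around(cut)
    print(f"cutpoint {cut}: slope before = {b:+.2f}, slope after = {a:+.2f}")
# With these illustrative numbers, the 2009 cutpoint shows roughly
# similar slopes (about -0.80 vs -0.95), while the 2011 cutpoint shows
# the decline more than doubling (about -0.66 vs -1.40).
```

Same data, opposite story, depending on nothing but an arbitrary modeling choice. When a result flips like this under equally defensible inputs, the honest conclusion is that the model carries no information, not that one cutpoint’s answer is the right one.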
Moreover, there is this point:
Exactly that. And this is not actually an “Or…” point (which is to say, Bates had already pointed out other critiques in previous tweets). It is the fundamental problem that renders everything else irrelevant. Not only is the model too sensitive to the 2009 cutpoint to be considered reliable, but it is entirely dependent on assuming that there would have been a linear trend for a particular ten-year period, modulo any effect of the introduction of e-cigarettes. This is an utterly absurd assumption, as the historical data shows.
This is further reason why the opposite affirmative claims are also unsupportable. That is, anyone who says “for the last few years we saw a drop greater than the linear trend, which suggests e-cigarettes caused a reduction” is also wrong. There are far too many variables to assume we should have expected a linear trend, let alone that a particular variable is the reason for the departure from it.
This is an example of those “anecdotes” you hear so much about, a concept that does not actually mean what most people think it means. If someone tells her story of how she prayed and that made her cancer go into remission, it is not evidence of the power of prayer. But this is not because the data takes the form of one person’s anecdote. It is because there are a lot of variables, and so there is no reason to assume that the one reported variable caused the outcome. And also because the story is only being told because of the coincidence of the two variables (sometimes known as “selecting on the independent variable” along with “selecting on the dependent variable”, or as “a-cell epidemiology”). So when someone tells his story of trying to quit smoking a dozen different ways over many years, and then trying an e-cigarette and quitting that day, it is useful data. Yes, it is still a personal story, but as noted, the problem with the prayer story as evidence was not that it was a personal story. The e-cigarette story includes reasons to be pretty sure he would not have quit that day by coincidence, as well as (presumably) the only major change on the quit day being the “exposed to e-cigarettes” variable. That is good data.
Now consider the Glantz model result. It has the relevant properties of “mere anecdote” despite being statistics. It was reported because of the particular association (or lack thereof). It was a one-off observation of only those variables, and so there was no reason to assume the smoking rate variable would have been different from what was observed if the e-cigarette variable had not changed, just like there was no reason to believe the remission would not have occurred without the prayer. Consider it this way: If we had a perfect model of human biology, such that we could measure that the person in the praying story was not destined for remission, but she prayed and went into remission anyway, then we would have an interesting observation. Similarly, if we had a perfect model that predicted population trends in youth smoking uptake, and then we shocked the system by introducing e-cigarettes, and rates changed from what was predicted by the model (in either direction), then we would have a decent reason to draw causal conclusions. However, we do not have that model, because it would be even more difficult to figure out than modeling cancer progression.
The big-picture point is that Glantz — unlike, say, McKee, Chapman, Zeller, or the random useful idiots — is a very skilled liar, burying some of his lies at a level where only a careful critic can identify them (though others are clearly intended at that Kasparov level). I am not trying to claim that I am exhibiting some stunning insight here. Dismissing a result because it is entirely dependent on a model that includes major indefensible assumptions and is unstable over minor perturbations of inputs is not exactly out-of-the-box thinking. But the thing is, we seldom actually see such proper criticism of junk claims. And so Glantz pulls off the trick of getting his critics to implicitly accept most of his invalid assumptions when they quibble about the details. This is a liar’s greatest coup.
Compare the Trump lie about voter fraud. It was properly called out as a lie simply because there is no evidence to support it. If someone were to challenge the claim (of 3-5 million fraudulent votes) by, say, presenting affirmative evidence that there could not have possibly been that many such votes, that would be a fail. It would be an implicit concession (in the eyes of some audiences) that Trump’s claim was defensible, and the debate is just about quantitative details. It also concedes that it is up to critics to provide affirmative evidence against the assertion. Some commentators took that bait, but the NYT, Politifact, and others did not.
The real challenge to the press is when (if?) Trump manages to assemble a junk policy shop (or Adelson and the Kochs lend him their pet “think tanks”), similar to Glantz’s junk science shop. If instead of asserting a lie based on pure bluster, Trump dresses it up with a model, no matter how bad, will the press be able to step up? If he claims that his shop’s junk model shows that trashing the ACA will actually save money and make people better off, will it still (properly) be called a full-on lie? Or will the press be tricked into its usual practice of implying that the model should be taken seriously (an impression which sticks in the mind of the reader) before picking at its details (which does not dislodge the initial impression of overall validity)? We shall see. But the habit of doing the same with utter junk tobacco control models does not bode well.
Can we enlist, say, the media group from the Union of Concerned Scientists to provide loaner expertise to the media so they don’t have to have 100 science experts on full-time staff to have a prayer of getting things at least remotely correct?
I am not exactly sure what their skilz are. Perhaps that would help. But it is actually a bit of specialization to be able to identify problems like this (notwithstanding the discussion about humility from Liam’s comment). A random scientist from a random field might have little familiarity with fields that are laden with unexamined modeling assumptions. They might have way more faith in journal review than is appropriate for health sciences. Then there are those who are likely to dismiss social science that is done as well as it could be, rather than distinguishing the utter junk from the merely imperfect (e.g., many physicists think more like Sheldon from Big Bang Theory than they do like Feynman). The lousy work of the big-name “debunker” types with newspaper columns demonstrates that the overarching rules of thumb often fail. So, yes, I am sure that would be a net positive. I am just not sure *how* positive.
Sadly for us, I don’t think the skepticism is really going to be extended much beyond aiming at Trumpisms. And if Trump were up there at the moment mouthing Glantz-nuttiness we’d be seeing the NYT etc totally ignoring the problems and, amazingly, praising him instead.
:/
MJM
” I am trying to claim that I am exhibiting some stunning insight here. ” Not your normal level of humility Carl :) (feel free to delete comment once sentence is corrected)
Nah, I’ll leave the comment (to emphasize my humility :-). I did fix the sentence though. Thanks!
Carl, as you have stressed many times, anybody with a decent understanding of scientific model building, and in particular of basic statistics, can appreciate that this article by Glantz (like many others you have stripped naked) is pure unadulterated junk. As you say, the issue is not an incorrect input; it is simply an incorrect model. It reduces to a lame attempt to use an excessively simplified, cherry-picking “questionary” toy model to study a complicated social phenomenon (smoking/vaping among experimenting youth) that involves multiple confounding variables. It is like attempting to build a Rolls Royce with the materials used to build a tricycle for a 3-year-old.
The technical deficiencies of practically all of the Glantzian research (because it is not only Glantz) are so evident and manifest that it is hard to understand why it has never been challenged by scientists outside Public Health (those inside PH will not hold the Glantzians to account, for the political reasons we all know). It is not difficult to deconstruct most of the articles Tobacco Control produces or inspires, not only on “gateway” theories, but on the claim of lethal harm from second hand or third hand smoke, not to mention malicious disinformation on hookah and cigar smoking and on snus and vaping.
A year ago I was naive enough to think that explaining the misconstruction of science behind these issues would be sufficient to alarm scientists outside Public Health. Sadly, I have gone around in circles: most of my colleagues (who could easily teach statistics to Glantz) are not receptive. One type of reaction is to question my impartiality: “you are a smoker”, they say, “so you want to find an excuse to convince me that smoking is not so bad after all”. Another negative reaction comes from their personal dislike of smoking, usually coupled with their endorsement of the paternalistic attitude behind Public Health that is common in academic circles. The typical reply after I explain the issues is “… yes, you are technically right, second hand smoke does not kill and vaping is harmless, but lying about all this is a necessary evil to rid humankind of the filthy smoking addiction”. It is impossible to argue against such dogmatism (and PhDs can be very dogmatic).
However, the main obstacle to showing scientists outside Public Health the lies and disinformation going on is that they regard PH as a proper scientific institution. After all, PH has all the external marks and trappings of the scientific institutions where we work (journals, peer review, a merit-based academic hierarchy). They see my efforts to explain the abuse of science behind PH’s claims (second hand smoke, harm reduction, vaping) as peddling a conspiracy theory against the thinking and workings of what they regard as a proper scientific institution. Since, as scientists, we are always besieged by peddlers of anti-science conspiracy theories (especially in Cosmology, whose philosophical and existential connections attract such nut cases), it is not hard for many of my colleagues to lump PH critics in with the anti-science nut cases. And while some critics of Public Health may indeed fit that profile, the formal recognition of PH’s scientific credentials, and its huge influence in the media and among the political class, make it easy for the Glantzes and Chapmans to paint all their critics as anti-science nut cases.
In other words, it is easy to show that “the king is naked”, but those who benefit from this nakedness are too well entrenched for those who see it to become sufficiently motivated to raise their voices. However, I am certain that sooner or later the king will be officially declared naked.
@RS, Forgive my uninformed comment (what I don’t know about science would fill a black hole) but I feel compelled to say that this: “yes, you are technically right, second hand smoke does not kill and vaping is harmless, but lying about all this is a necessary evil to rid humankind of the filthy smoking addiction” is possibly the scariest thing I have read this week.
Carl, your comment on the Glantz-Dutra article was “removed by moderators”. Maybe you should consider posting it.
Thanks. I had been alerted. I intend to do something. Watch this space.
Pingback: The travesties that are Glantz, epidemiology modeling, and PubMed Commons | Anti-THR Lies and related topics