[Update: For those who want more details of the criticism of the Dutra-Glantz paper, or are only interested in that and not the broader question of how to combat lies, I have posted a PubMed Commons comment here.]
Further on the critically important theme of my previous post, we are perhaps already starting to see a positive trend. The New York Times went as far as to identify one of Trump’s lies with the word “lie” in its top headline today. They did not go quite so far as to label him a “liar”, understandably, but that is implicit. Readers of this blog will recall my arguments for the importance of calling out liars as such. Piecemeal responses to each individual lie are a hopeless tactic. For one thing, you end up with this problem:
So it may be that legitimate news sources are adjusting remarkably rapidly. Perhaps they will extend that skepticism to their reporting on more technical issues, like health claims.
Well, I can dream.
For reasons similar to why that is a dream, I wonder whether the NYT and other mainstream press, or even the self-appointed “fact checker” types, will identify as “lies” utterly indefensible claims that are less immediately obvious. It is pretty easy to bury such lies under a patina of science or analysis, so that it takes a bit more courage to declare them clearly false.
Since it is usually possible to dress up a lie with some such patina, bald lies that are just asserted seem to be a different tactic. They are not meant to be believed. They are a test (which, happily, the press is starting to come out on the right side of):
It is more complicated when those who really understand the claim can see it is clearly false or unsupportable, but the average news writer or editor cannot. That complication is why I have little faith this newfound skepticism will spill over into science reporting.
Consider the latest battle over e-cigarette junk science: this paper from Dutra and Glantz about the effect of e-cigarette availability on youth smoking, and these responses by Siegel and Snowdon. I can save you some reading, because the story is as simple as this: Dutra and Glantz took NYTS data for smoking rates among American minors. They fit a linear trend to the decline between 2004 and 2009 (which they declared the start of the e-cigarette era), and then observed that the trend did not change much for 2009 to 2014. From this they concluded that e-cigarettes were not contributing to the decline in smoking. They then told the press that this shows that e-cigarettes are causing more smoking.
Where to start? Actually, there is really only one place to start, one underlying problem that really matters. But that is the part that requires some scientific thinking to notice.
Naturally the critics called out the internal contradiction, where the supposed lack of any change in the smoking decline associated with e-cigarettes was portrayed as e-cigarettes actually causing more smoking. As noted by Snowdon, however, not all of the press balked at even that obvious lie (this is arguably an example of the above point by Kasparov). A slightly more sophisticated critique is the observation that if we use a cutpoint of 2011 in the Glantz model (i.e., calculate the linear trend from 2004 to 2011, rather than 2009, and then compare to the linear trend for 2011 to 2014), then there is a steeper decline in the latter part compared to the former. One could argue, fairly persuasively, that 2011 is a better choice for testing for an inflection, since it is when some measurable usage of e-cigarettes began in this population.
But the real problem is not that Glantz chose the wrong cutpoint. The real problem is one that is common throughout epidemiology research, not just anti-tobacco propaganda: reporting a result that is entirely dependent on an assumed model without ever examining the validity of the model. Indeed, the trick is to never acknowledge there even is an assumption-laden model (acknowledging it would mean having to justify the assumptions) and to hope the reader does not notice. If someone argues that 2011 is a better cutpoint than 2009, and that therefore the conclusion should be there was an increase in the decline in smoking associated with e-cigarette use, they have already lost the war. They have conceded that the basic model is legitimate and the only reason to doubt the result is a he-said-she-said quibble over a detail. Most people who are not already predisposed to disagree with Glantz will tend to accept the original version of that detail — after all, that is what was officially published.
The proper critique based on the observation about switching to 2011 is not “my input is better than his” but “this model is completely unstable, producing fundamentally different results with alternative inputs which seem to be as reasonable as the original inputs, and thus any result from it is meaningless.” Notice the difference between this, which says the model has no information value at all, and the claim that a tweak to the model produces a result that the critic prefers. The latter actually validates the original claim in the eyes of many readers.
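To make that instability concrete, here is a minimal sketch using made-up numbers, not the actual NYTS data, just a hypothetical series shaped roughly like the story above: a steady decline through 2009, a slowdown through 2011, then a steep drop. Fitting the same simple before/after linear model with the two competing cutpoints produces two very different stories:

```python
import numpy as np

# Hypothetical youth-smoking rates (percent) for 2004-2014.
# These are invented for illustration; they are NOT the NYTS figures.
years = np.arange(2004, 2015)
rates = np.array([24.0, 22.7, 21.4, 20.1, 18.8, 17.5,  # steady decline to 2009
                  17.2, 16.9,                           # slowdown, 2010-2011
                  14.9, 12.9, 10.9])                    # steep drop after 2011

def slope(lo, hi):
    """Least-squares linear trend (percentage points per year) over [lo, hi]."""
    mask = (years >= lo) & (years <= hi)
    return np.polyfit(years[mask], rates[mask], 1)[0]

for cut in (2009, 2011):
    pre, post = slope(2004, cut), slope(cut, 2014)
    print(f"cutpoint {cut}: pre-trend {pre:+.2f}/yr, post-trend {post:+.2f}/yr")
```

On this toy series, the 2009 cutpoint yields pre- and post-trends of about -1.30 and -1.37 points per year, i.e., essentially no change, while the 2011 cutpoint yields about -1.07 and -2.00, a near-doubling of the decline. Same data, same model form, contradictory conclusions; that is the sense in which any result from such a model is meaningless, rather than merely sensitive to a debatable detail.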
Moreover, there is this point:
Exactly that. And this is not actually an “Or…” point (which is to say, Bates had already pointed out other critiques in previous tweets). It is the fundamental problem that renders everything else irrelevant. Not only is the model too sensitive to the 2009 cutpoint to be considered reliable, but it is entirely dependent on assuming that there would have been a linear trend for a particular ten-year period, modulo any effect of the introduction of e-cigarettes. This is an utterly absurd assumption, as the historical data shows.
This is further reason why the opposite affirmative claims are also unsupportable. That is, anyone who says “for the last few years we saw a drop greater than the linear trend, which suggests e-cigarettes caused a reduction” is also wrong. There are far too many variables to assume we should have expected a linear trend, let alone that a particular variable is the reason for the departure from it.
This is an example of those “anecdotes” you hear so much about, a concept that does not actually mean what most people think it means. If someone tells her story of how she prayed and that made her cancer go into remission, it is not evidence of the power of prayer. But this is not because the data takes the form of one person’s anecdote. It is because there are a lot of variables, and so there is no reason to assume that the one reported variable caused the outcome. And also because the story is only being told because of the coincidence of the two variables (sometimes known as “selecting on the independent variable” along with “selecting on the dependent variable”, or as “a-cell epidemiology”). So when someone tells his story of trying to quit smoking a dozen different ways over many years, and then trying an e-cigarette and quitting that day, it is useful data. Yes, it is still a personal story, but as noted, the problem with the prayer story as evidence was not that it was a personal story. The e-cigarette story includes reasons to be pretty sure he would not have quit that day by coincidence, as well as (presumably) the only major change on the quit day being the “exposed to e-cigarettes” variable. That is good data.
Now consider the Glantz model result. It has the relevant properties of “mere anecdote” despite being statistics. It was reported because of the particular association (or lack thereof). It was a one-off observation of only those variables, and so there was no reason to assume the smoking rate variable would have been different from what was observed if the e-cigarette variable had not changed, just like there was no reason to believe the remission would not have occurred without the prayer. Consider it this way: If we had a perfect model of human biology, such that we could measure that the person in the praying story was not destined for remission, but she prayed and went into remission anyway, then we would have an interesting observation. Similarly, if we had a perfect model that predicted population trends in youth smoking uptake, and then we shocked the system by introducing e-cigarettes, and rates changed from what was predicted by the model (in either direction), then we would have a decent reason to draw causal conclusions. However, we do not have that model, because it would be even more difficult to figure out than modeling cancer progression.
The big-picture point is that Glantz — unlike, say, McKee, Chapman, Zeller, or the random useful idiots — is a very skilled liar, burying some of his lies at a level where only a careful critic can identify them (though others are clearly intended at that Kasparov level). I am not trying to claim that I am exhibiting some stunning insight here. Dismissing a result because it is entirely dependent on a model that includes major indefensible assumptions and is unstable over minor perturbations of inputs is not exactly out-of-the-box thinking. But the thing is, we seldom actually see such proper criticism of junk claims. And so Glantz pulls off the trick of getting his critics to implicitly accept most of his invalid assumptions when they quibble about the details. This is a liar’s greatest coup.
Compare the Trump lie about voter fraud. It was properly called out as a lie simply because there is no evidence to support it. If someone were to challenge the claim (of 3-5 million fraudulent votes) by, say, presenting affirmative evidence that there could not possibly have been that many such votes, that would be a fail. It would be an implicit concession (in the eyes of some audiences) that Trump’s claim was defensible, and the debate is just about quantitative details. It also concedes that it is up to critics to provide affirmative evidence against the assertion. Some commentators took that bait, but the NYT, Politifact, and others did not.
The real challenge to the press is when (if?) Trump manages to assemble a junk policy shop (or Adelson and the Kochs lend him their pet “think tanks”), similar to Glantz’s junk science shop. If instead of asserting a lie based on pure bluster, Trump dresses it up with a model, no matter how bad, will the press be able to step up? If he claims that his shop’s junk model shows that trashing the ACA will actually save money and make people better off, will it still (properly) be called a full-on lie? Or will the press be tricked into its usual practice of implying that the model should be taken seriously (an impression which sticks in the mind of the reader) before picking at its details (which does not dislodge the initial impression of overall validity)? We shall see. But the habit of doing the same with utter junk tobacco control models does not bode well.