“Whatever happened to be measured” is not the same as exposure+outcome of interest

by Carl V Phillips

“Public health” research has countless failure modes: not understanding human motivations and other economic innumeracy; poor epidemiology methods that ignore advances since 1980; cherrypicking or just lying about evidence; childish faith in the journal review process; etc. But among the worst is drawing conclusions as if whatever happened to be measured in a study — often a rough proxy for one of many aspects of the phenomenon of interest — is a measure of the phenomenon of interest. (I know that sentence is a little dense; read on and it will become clear.)

In the previous post, I pointed out that there is no penalty in public health for dishonesty or genuine incompetence. The motivational example was an in vitro test that showed e-cigarette liquid had some measurable effect on cells, something that tells us nothing about real health risks, let alone quantifies them. Yet the authors emphatically concluded that this shows e-cigarettes are as harmful as smoking. That may seem like a cartoon example, but while this post was being drafted, this came in:

The researchers injected an almost homeopathic quantity (if we assume they are competent enough to have accurately reported their methods) of e-cigarette liquid into rats and observed metabolic changes (which one can guess must have been study error, but that is another story). They then claimed this had some relevance to human product use. Perhaps there is some information contained in this, but it is certainly not about the effects of vaping.

Those are silly examples from lab experimenters who know nothing about health or people, and are just stumbling around, looking for some application of the one poor tool they have in their toolbox. Any minimally competent reader can recognize that the claims are not supported by the evidence. But the problem is less obvious, and thus more harmful, for research that comes closer to measuring the outcome of interest.

Consider the “gateway” claims that are based on merely observing that the young people who use e-cigarettes are more likely than average to smoke. That is something we would predict if the gateway claim were true, so it is clearly related to the gateway claim. A competent researcher who was seriously investigating the gateway claim would want to know it (in contrast with those lab studies, which a competent researcher who was seriously assessing health effects would just ignore because they are completely uninformative). But observing that is not the same as observing that e-cigarettes are causing smoking. That is obvious when stated, but conflating those is exactly what they do: They measure something that is one piece of the puzzle you would want when trying to assess the claim, and then conflate that with having measured exactly the phenomenon of interest.
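To see why that observation alone cannot settle the question, here is a minimal sketch (my own toy model with made-up numbers, not drawn from any study) in which a shared underlying propensity drives both behaviors, vaping has zero causal effect on smoking, and yet vapers still smoke at far higher rates than average — exactly the statistic offered as gateway evidence:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Toy model: one shared "propensity" (risk-taking, peer group, etc.)
    # drives both behaviors; vaping has ZERO causal effect on smoking here.
    propensity = rng.normal(size=n)
    p = 1 / (1 + np.exp(-(propensity - 2)))   # same propensity-driven probability
    vapes = rng.random(n) < p
    smokes = rng.random(n) < p

    # Vapers are nonetheless far more likely than average to smoke -- the very
    # observation that gets presented as proof of a gateway effect.
    print("P(smokes | vapes)     =", round(smokes[vapes].mean(), 3))
    print("P(smokes | not vapes) =", round(smokes[~vapes].mean(), 3))

The point is not that the gateway claim is therefore false, only that this particular measurement is equally consistent with it being false.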

In the previous post I noted that public health’s junk science problem spills over onto opponents who are influenced by the public health way of thinking. In the gateway case, a common claim is that because underage smoking rates are going down where e-cigarettes are popular, there cannot be a gateway effect. This is an example of the exact same problem, measuring something that has some bearing on the question at hand and then declaring it to be isomorphic to (i.e., exactly the same as, substitutable for) what you really want to know. A gateway effect would affect those statistics, but those statistics are not a measure of the gateway effect.

I write this post because of a comment I recently posted to the PubMed paper comments system, about the Shiffman et al. study of teenagers’ responses to questions about e-cigarette flavor descriptors. You may recall that in that study (commissioned by NJOY and conducted by Pinney Associates, if you care about such things), nonsmoking teenagers were asked whether particular e-cigarette flavor descriptors would interest them, and they showed almost no interest in any of them. There were some other bits too, but that is what really mattered. This has been widely interpreted as showing that teenagers are not attracted by interesting e-cigarette flavors. But does it really show that? One commentator challenged the conclusion in a rather scattershot way and Shiffman responded effectively. I responded to that exchange. You can find all of that here, but what follows is the entirety of what I posted, which should be self-explanatory.

—–

I read with interest the exchange between Robert Jackler and Saul Shiffman about this paper and would like to comment on two bits of the exchange. In general, I believe that Shiffman’s responses were compelling, and that they effectively rebutted Jackler’s criticisms. In particular, I agree with his assessment that Jackler basically starts with a premise that the paper’s conclusions are wrong and then speculates about what made them wrong, rather than actually building a case that anything was wrong.

Jackler makes various accusations (some overt, some innuendo) about Shiffman et al. being inappropriately influenced by their sponsor. Shiffman responds by mischaracterizing these as ad hominem. Ad hominem attacks are common in this space, and anyone who departs from a strict tobacco control party line will almost inevitably be the target of them. But the term is also misused in this space, and this is an example. Claiming “Shiffman et al.’s research should be ignored because they consort with those I declare to be the enemy,” would be ad hominem, but there is only a hint of that in Jackler’s comments. Instead, he mostly claims that the research was faulty because of influences of the funder on the design of the particular research. This may be a cheap rhetorical tactic and unsubstantiated innuendo – as Shiffman argues – and it is certainly insulting to the integrity of the authors, but none of that makes it ad hominem.

Jackler introduces one valid scientific concern, and it should be extended to most of the research in this space. He argues (my paraphrase) that the expressed lack of interest in the products, whatever flavor descriptor was offered, mostly reflects the general hostility toward tobacco products that is inculcated in this population of teenagers. Shiffman responds to the criticism as phrased, quite legitimately, by arguing this was exactly the authors’ point, that flavors are not overcoming the programmed resistance to using the products. But the comment and response skirt the real scientific concern here: All of the research in and around this point produces rough measures that only partially inform the question of interest, which is, “what product characteristics cause different choices in the real world?”

Teenagers’ responses to abstract survey questions, posed by people who presumably feel a lot like the authority figures who have been instilling the anti-tobacco message, probably do just trigger the inculcated response. This makes them a limited measure of whether a flavor option will change someone’s choice when he is presented with the opportunity to try a product. The present study is clearly more informative than anything cited in support claims [sic — if there were no typos, who would believe I was the author?] that flavors have caused a torrent of underage use (let alone claims that attracting underage users is the purpose of interesting flavors, given that adults clearly prefer them; see, for example, my recent report of survey results at <https://antithrlies.com/2016/01/04/casaa-ecig-survey-results/>). But what was measured is only one contributor to the actions of interest.

The failure to recognize that what is measured is not the same as what is being asked extends throughout this field of inquiry. Jackler asserts that looking at flavor usage patterns would be a better measure, and Shiffman correctly points out that this would be an answer to an different [sic :-)] question. It would tell us something that relates to the question of interest, though it is even further removed than what the study measured. Yet it is quite common for opponents of product availability to claim that mere demonstrated preference for a particular product feature is evidence that the feature is causing use. Indeed, guessing at what Jackler alludes to as an “extensive body of research” about preferences for flavors of other tobacco products, this describes most of that research. Fuzzy and noisy observations that are probably associated with the question of interest can allow us to modestly update our beliefs. But commentators, including many research authors, make absolute claims, apparently oblivious to the necessary epistemic modesty.

Moreover, the common absolute claims (e.g., “this shows kids are not interested in flavors” or “this proves that flavors are attracting kids”) are absurd on their face. For any improvement in a product’s quality (such as the availability of a particular flavor), there are some combinations of individual preferences such that the improvement would tip someone’s preference about wanting to use the product. Since there are a lot of people, chances are some have that preference pattern for any substantial improvement (and this will include some “proper” and some “improper” users, if one is inclined to create such categories). The question cannot be, “are any kids motivated by the flavors?” (or by flavor descriptors, which is a somewhat different question), because the answer to that is surely yes. The question must be, “how many?” The Shiffman et al. results contradict the political claims that underage users are flocking to e-cigarettes in droves because they have heard about the particular flavors from the study, but absolute claims that have been made about the results are clearly false. Any author who seeks to make a scientific contribution in this area needs to explain, at least very roughly, how empirical results contribute to an economic model of preference and choice that can provide a quantitative estimate of the phenomenon of interest. (Anyone who wants to go further and claim that the phenomenon is substantially harmful, to whatever extent it occurs, must present separate analysis. This obviously does not follow from claims that the phenomenon is occurring, as many authors imply; it is entirely plausible that the material impact is nil or even beneficial.)

The absolutist rhetoric that dominates the policy fights in this area seeps into the science and poisons it, causing researchers to traffic in simplistic claims. Indeed, the rhetoric that causes the problems addressed above is exemplified in the second sentence of the abstract [“However, uptake of e-cigarettes by nonsmoking teens would add risk without benefit and should be avoided.”], in which the authors assert that e-cigarette use has no benefits apart from smoking cessation. This is obviously not true; if people are choosing an action, it is because it has benefits. But if the myth to the contrary is taken as a premise – effectively assuming that actions are caused by demonic possession rather than volition – it is very difficult to apply the economic reasoning sketched above. Researchers and commentators in this area give little indication they recognize that they are making claims about choices, which are volitional acts that are a function of preferences, opportunity, and product characteristics. They need to assess how particular observations fit into a model of that process, rather than implicitly assuming that whatever happened to be measured is isomorphic to the outcome of that process.

——-

The first bit of my comment, though a separate point, actually relates to the point at hand. There is a lot of misuse of the term “ad hominem” to refer to any mention of a person or a group of people in the context of criticizing scientific claims. This is misguided and harmful to useful analysis. Questioning whether some influence made a study’s design faulty is not ad hominem; it is a potentially legitimate criticism (unsubstantiated, in this case). Identifying someone as a liar because their writings contain lies is not ad hominem — it is roughly the opposite of it, and it serves a useful purpose. More importantly, it is not ad hominem to suggest that public health researchers typically conflate whatever they happened to measure with the phenomenon of interest, and therefore all of their conclusions should be viewed with suspicion that they have done this. That is just good inductive reasoning, the sensible way to respond to a common problem. It would genuinely be ad hominem to say, “that was written by X, who always writes fallacious conclusions, and therefore these conclusions probably are wrong,” though that does not make it unreasonable; understanding X’s history and its implications is quite useful.

Returning to the main point, it should be obvious that neither the original study nor Jackler’s proposed alternative study constitutes a measure of exactly the phenomenon of interest. The study is not imperfect because NJOY directed Shiffman et al. to study the wrong thing. It is imperfect because there is no way to use a survey method like this, or observations of actual flavor choices, to actually measure the impact of flavors on decisions. There are research methods that get closer to the actual question (e.g., in-depth interviews), but they are more difficult and so seldom done. But that is no excuse for pretending that whatever study was easy to do was a perfect measure of exactly the phenomenon of interest. Looking under the lamppost first is sensible, but pretending that constituted a thorough search of the whole area is junk science.

Since the conflation fallacies tend to lead to absolute claims, the legitimate quantitative claims are then lost. Those quantitative claims are often sufficient to debunk the silly extremist claims, which then narrows the conversation (among honest people, anyway) to the realistic range. Thus the “knowing that there are flavors is not causing droves of teenage vaping” observation above debunks many claims, but does not mean there is no real-world causation. Similarly, the aforementioned population product usage statistics are sufficient to debunk silly gateway claims like “e-cigarette use is reversing the gains that have been made in reversing youth smoking,” even though they cannot possibly show there is no gateway effect. Or, to illustrate with an example from another realm, the huge increase in private sector employment shows that it is silly to claim that the ACA and other recent US government policies are massive “job killers,” even though it could not possibly show that no one lost his job because of the policies.

In all of these cases, there is no possibility for rational (i.e., legitimate scientific) analysis if the somewhat informative data is either ignored or treated as definitive. As always, real scientific inference consists of making sense of whatever you do know. This means not pretending ignorance when there is useful information (saying that something “may” be the case when the evidence is already fairly clear, one way or the other). But it also means not pretending useful information is more than it is.
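To make that epistemic point concrete, here is a minimal Bayesian-updating sketch (the numbers are invented purely for illustration, not estimated from anything) showing how a noisy, partially informative observation should move a belief modestly rather than settle it:

    # Invented numbers, purely illustrative: an observation that is somewhat more
    # likely if hypothesis H is true than if it is false justifies a modest update,
    # not a declaration that H has been proven or refuted.
    prior = 0.5                 # assumed prior probability of H
    p_obs_if_h = 0.6            # chance of seeing the observation if H is true
    p_obs_if_not_h = 0.4        # chance of seeing it anyway if H is false

    posterior = (p_obs_if_h * prior) / (
        p_obs_if_h * prior + p_obs_if_not_h * (1 - prior)
    )
    print(f"posterior = {posterior:.2f}")   # 0.60 -- more likely than before, far from certain

Treating such an observation as if the posterior were 1.0 (or 0.0) is exactly the conflation described above.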

12 responses to ““Whatever happened to be measured” is not the same as exposure+outcome of interest”

  1. I’ve been trying to convince my pet rat to give up his IV e-liquid habit. Hopefully this gets his attention.

    • Carl V Phillips

      Yeah, good idea. Especially considering that if you believe their methods — which seems like a stretch given their overall ineptitude — they injected just 1 mg of liquid to get the effects. Not 1 mg of nicotine worth, but just 1 mg of liquid. That is less than one drop.

      • Tobacco controllers almost invariably think e-liquid is 100% pure nicotine (which is likely the intended result of FDA and CDC incessantly referring to it as “liquid nicotine” in all their press releases). It would be no great surprise if this assumption found its way into their research methodology.

        I’m going to go have a light beer now; or, as I like to call it, “liquid ethanol.”

        • Carl V Phillips

          Hey, that’s my line: http://blog.casaa.org/2015/09/casaa-comment-on-fdas-proposed.html

          However, those people are not tobacco controllers. They are random lab rats(!) who have nothing useful to do and so have just hopped on the bandwagon. I would like to think that they have at least the minimal competence in chemistry to understand the difference.

        • Nate
Homeopathy, when it comes to booze, would be a ‘marriage at Cana’ wonder: just wave a bottle of grand cru over a tub of water and bingo, the cellar is restocked.

  2. Damn. I really liked that line when I thought it was mine. Guess it’s still pretty good though.

