by Carl V Phillips
Yes, the man whose superpower is an inhuman ability to willfully misinterpret study results and lie to the public based on that (and who is completely immune to the effects of evidence, logical argument, authors telling him he is interpreting their studies wrong, etc.) is complaining about research ethics. In particular, he is complaining about the recent Shiffman et al. paper which demonstrated that the prospect of interesting flavors did basically nothing to entice teenage non-users toward wanting to use e-cigarettes.
Glantz wrote an extremely weak letter to the journal that published the paper; to its credit, the journal rejected the letter. The study had some definite limitations and I would say that the authors did over-conclude from their results, which Glantz tried to say (a stopped clock is right twice a day, though it is seldom so deliciously ironic when it is). But the basic conclusion of the study — that interesting e-cigarette flavors provoke a collective yawn among teenagers who do not use tobacco — is solid. It is this result, which flatly contradicts a favorite claim of Glantz and his cabal, that he tried, completely unsuccessfully, to challenge.
But it is not that (a point I will return to) that I find the most interesting part of Glantz’s missive, but rather this:
There are also serious concerns about the ethics of the study. The authors state that the work was “exempt” from human subjects because they were using de-identified data collected by a third-party internet survey firm. While subject confidentiality is certainly an issue, so is the fact that Shiffman, et al. were subjecting youth (as well as adults) to stimuli that could increase the respondents’ likelihood to try an e-cigarette, thereby possibly introducing them to nicotine addiction. There is no acknowledgement of this risk to the subjects or steps taken after the survey was completed to mitigate these risks, much less obtaining informed consent from the minors’ parents or the adults participating in the study. Such studies typically include anti-tobacco education at the end to try and [sic] blunt the effect of any pro-tobacco or pro-e-cigarette effects of collecting the data.
Glantz does not understand human subjects ethics rules any better than he understands scientific inference, epidemiology, toxicology, the difference between liquids and solids, etc. There is simply no reason to require inflicting an anti-tobacco screed, or any follow-up information whatsoever, on subjects after they answer some impersonal and non-leading questions. Indeed, that would increase the burden on the subjects rather than decrease it, exactly the opposite of the goal of human subjects protections. Glantz personally might want to impose anti-tobacco propaganda on people every time they answer survey questions. Indeed, he would probably want to do it every time they went to the bathroom. But that has nothing to do with ethics, except in the sense of it being unethical.
It would be different if the survey involved presenting misinformation to the subjects, especially if the misinformation was not clearly presented as false hypotheticals. For example, the notorious “third-hand smoke” survey, which the ANTZ claim showed there was public concern about that (nonexistent) risk, consisted of push-poll questions designed to make the subjects worry about the risk, followed by asking them whether they were worried. In that case, ethics concerns required a post-survey briefing, explaining that there is no conceivable health risk from this exposure, to avoid harming the subjects by planting false information in their minds. Remember how much Glantz complained about that omission. Hmm, me neither.
Another example that would require a post-questioning briefing is the research by Popova and Ling, which presented the subjects with misleading hypothetical warning labels for low-risk tobacco products but apparently did not inform the subjects of the deception afterward. This was one of several genuine human subjects research ethics violations that I noted in my analyses of that travesty of a paper. Hmm, now that I think about it, that name rings a bell… oh, right, Glantz identifies Popova as a coauthor of the letter I am quoting from. The irony just gets better and better.
Notice that these examples both contrast with the Shiffman study, in which the subjects were not led or misled (as far as we can tell — as is typical in public health research, the authors did not fully report their study methods). Assuming what we can guess about the survey from what was reported is correct, this type of survey presents no risk to the participants. None. It is a survey, so there is no possibility of physical harm. There is no personally revealing data that could harm the subjects if it fell into the wrong hands (not even if they were identified, much less given that they were not). There were no soul-searching questions whose contemplation could distress the participants. There is no misinformation. Indeed, even if you are someone like Popova or Glantz, who thinks (contrary to any real ethics rule) that giving people true information should be considered harmful to them, there is not even any of that — no one who is not living under a rock is unaware that e-cigarettes are available in a variety of flavors.
Now there is a grain of truth in what Glantz wrote. That “third-party data” exemption applies to analyzing data that someone else already collected on their own initiative; what Shiffman did (contracting with a vendor to collect the data) does not trigger that particular exemption. What Shiffman et al. should have said was that the only conceivable cost to the subjects from participating was the time they spent doing so. That makes it exempt from needing IRB (institutional review board) ethics approval, a process that exists to guard against real harm being inflicted upon people, particularly physical harm or the disclosure of sensitive information.
Now someone living in the bubble of a university might not realize this, so this is actually more of an everyday error than a Glantz-level error. Universities tend to require their employees to seek internal IRB approval for any research that has anything to do with people. This is partially mission creep and partially CYA overkill. It also turns out to be actively harmful to research because university IRBs are as politically influenced as any other aspect of university behavior. Real social science IRBs tend to immediately approve (or certify that no approval is needed) harmless survey research like Shiffman’s. But the health IRBs — the same people who routinely approve exposing people to experimental poisons in clinical research — have a habit of prohibiting surveys like this, pretending it could be harmful, but really applying a political litmus test. That, of course, is what Glantz would want. IRBs are tasked with acting like judges, but few people in the public health arena behave according to an ethical code like lawyers do. (Yes, go ahead and insert your favorite lawyer ethics joke here, but the fact is that they do have a code of conduct and sense of duty, and most adhere to it quite rigorously, in contrast with public health people.)
Anyway, the point is that Shiffman et al. were just doing the type of harmless survey that each of us gets in our inbox every day from someone. If that someone is anyone other than a university researcher, you can be sure there was no IRB approval for it. Now, some journals subscribe to the mission-creep rules and require anything they publish about human subjects to have formal IRB approval. But that is their own choice, not a real ethics requirement. Results from these harmless non-IRB-submitted studies are published in various places all the time, and no one ever suggests there is a human subjects ethics concern — except when they just don’t like the results, of course.
Circling back to the substantive bit of Glantz’s letter, since there is amusement value to be had:
This conclusion is unlikely to be reliable because it is based on responses to a single question on interest in flavors that makes the results likely affected by floor (and ceiling) effects. Floor and ceiling effects occur when a measuring instrument is not sensitive enough to detect the real differences between participants when their answers are clumped at the low or high end of the possible range of values. An example of a floor effect would be testing mathematical knowledge using a problem that is so difficult that no one can solve it; thus, it will not reveal the true differences in mathematical knowledge.
Shiffman, et al. found almost no interest in any flavors of electronic cigarettes among teenagers who have never tried tobacco products (including e-cigarettes) and very low interest among adult smokers based on responses to a single question (albeit about 24 different flavors/products): “How interested would you be in using a [flavor] [product]?”
Um, yeah, that is kind of the point. Contrary to what Glantz and his cabal like to claim (without the slightest evidentiary basis), no flavor is intriguing enough to non-user teens to push their interest level above almost zero. Glantz should really not try to write about technical points that he read something about once, but rather stick to what he really knows, like… um, never mind.
The floor effect would matter if the authors were attempting to assess which flavors were most attractive. If that were the point, then the fact that all the results were so close to zero would render any conclusions meaningless. But since the relevant observation is that all the results were so close to zero, no problem. To use his analogy, if the research question were not “who has more mathematical acumen” but “is this question so hard that almost no one gets it”, then hitting the floor is the useful result, not a flaw in the design. I wonder if Glantz objects to any survey where there is near-unanimity of results; that is what this implies.
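Since the distinction may not be obvious, here is a toy simulation sketch in Python. Every number in it is invented purely for illustration (the rating scale, the response probabilities, the flavor labels); none of it comes from the actual Shiffman survey. It shows how responses clumped at the floor make a flavor-ranking question hopeless while directly answering a “is interest near zero?” question.

```python
# Toy sketch with entirely invented numbers (nothing here comes from
# the actual Shiffman survey): why a floor effect ruins a
# flavor-ranking question but not an "is interest near zero?" question.
import random

random.seed(0)

SCALE_MIN = 1  # hypothetical 10-point interest scale, 1 = "not at all"

def simulate_responses(p_any_interest, n=1000):
    # Most simulated respondents sit at the floor; a small invented
    # fraction reports mild interest (a rating of 2-4).
    return [
        random.randint(2, 4) if random.random() < p_any_interest else SCALE_MIN
        for _ in range(n)
    ]

# Two hypothetical flavors with slightly different (invented) appeal.
for name, p in [("flavor A", 0.03), ("flavor B", 0.05)]:
    data = simulate_responses(p)
    mean = sum(data) / len(data)
    share_at_floor = sum(1 for r in data if r == SCALE_MIN) / len(data)
    print(f"{name}: mean = {mean:.2f}, share at floor = {share_at_floor:.0%}")

# Both means come out barely above 1 and nearly indistinguishable, so
# ranking the flavors against each other is hopeless. But the pile of
# responses at the floor is itself a direct answer to the question
# "does any flavor raise interest above almost zero?" -- it does not.
```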
The problem with just using a single question is that most people (especially those who are not yet tobacco users) are not interested in using a product even though they might be interested in trying it or using it in a specific situation.
LOLOL. So Glantz suddenly discovers that trying is not the same as using. That is one to throw back at him when he conflates those, as he always does.
He then rambles on with some stuff about asking multiple questions (when someone is interested in multiple measures), which has no relevance to his point and, disappointingly, is not even all that funny.
Of course, he includes his usual ad hominem:
It is also important to note that e-cigarette manufacturer NJOY sponsored the study and participated in the study design. There is no discussion in the paper itself of how this sponsorship or the participation might have affected the design, particularly the flawed methodology discussed above, that would bias the results towards reporting a smaller effect of flavors, which is in NJOY’s interest.
Um, yeah. Just like Glantz himself always discusses in the body of the paper how the funders of his “research” were involved in it. Obviously NJOY, along with everyone else in this business, as well as e-cigarette consumers and anyone who is genuinely concerned about public health, appreciated having a study design that demonstrated the lack of enticement. Only people like Glantz who want to make the absurd claims that this study debunks would object to it. But the authors did actually state that the funder played no role in the details of the analysis (in the conflict of interest statement, where such discussion belongs). Now if Glantz’s letter had actually shown, or even claimed, that the fundamental study design could not measure something relevant, he might have a point. But his substantive content was limited to a (willful?) misunderstanding of what aspect of the results matters and what the relevant question was. So, again, kudos to the journal for rejecting this exercise in deceit.
Now it turns out that there were aspects of the Shiffman study design that had me shaking my head. I would go so far as to call them flaws, and ones that could have easily been avoided. But, not surprisingly, the pretend-scientists at UCSF did not actually identify any of those.
(h/t Brian Carter)
I have to give you credit for reading his entire post of whiny drivel. I got as far as “Much to our surprise, Nicotine and Tobacco Research decided not to publish the letter.” I laughed pretty hard at that, knowing how selective he is about comments on his own blog, but couldn’t bring myself to read any further.
Yeah, that was pretty funny in itself, wasn’t it. “Hey, how dare an anti-tobacco journal not publish any old anti-tobacco drivel that I send them?!”
Brilliant summarization!
When I conducted lab studies with smokers I was specifically looking for smokers who had no interest in quitting. Occasionally I hear rumblings that I should thrust a stop-smoking pamphlet in with the $25 I paid them as they left. Never had such a requirement wind up in the approved protocol. Nor have I heard from any of my fellow tobacco researchers that they were required to do so. And this is dealing directly with smokers, not asking people in the general population *about* tobacco products. So Glantz, as usual, is living in his own little world if he thinks this is “typical,” even in university settings like his.
Which brings up the question of what Popova and Ling did in their post-study briefing. I can’t see them passing up the opportunity to push their anti-tobacco views, no matter who the subjects were or what they did to them. But I also can’t see them correcting the “ecigs cause cancer” lie because, as Popova put it, “the scientific literature on the harmful effects of e-cigarettes is by no means settled.” How’s that for a non sequitur?