New public health research: lying to people can affect them (as if they didn’t already know)

by Carl V Phillips

A new paper in the normally more-respectable BMC Public Health, by never-respectable ANTZ at the University of California (San Francisco), reports research that mostly showed that, if people were given disinformation claiming (nonexistent) health effects from smokeless tobacco and e-cigarettes, accompanied by gory pictures, they would be tricked into thinking the risk was higher. Surprise!

Well, of course, it is no surprise that people can be tricked and no surprise that UCSF “researchers” would conduct such unethical research. It is rather more of a surprise that the non-ANTZ BMC Public Health would publish it and that an ethics committee would allow it to be done. Ok, maybe not the latter — the ethics committees are pretty much in the pocket of public health. That committee at UCSF probably would never allow, say, Farsalinos’s survey of e-cigarette users, and would trump up some claim that it was a threat to the study subjects, whereas they allowed serial liars Lucy Popova and Pamela M Ling a free hand to tell people they might as well smoke.

Anyway, Clive Bates was first off the blocks in responding to this travesty, and he covered the breadth of it well, so I am not going to reinvent the wheel here. Go read what he wrote first. Then come back to this, wherein I go deeper into a few specific points.

“Warning” labels

The title of the article refers to warning labels and the authors use that term to refer to the gory graphics that they showed people. But those are not warning labels. “Warning” implies the transmission of information about risks that someone might decide to act upon. A graphic warning would not be a scary photograph but a picture that actually conveyed warning information. The graphic might help it stand out from the text, like those that appear on ladders showing a stick figure standing on the top step with a red slash through it. Or it might be for communicating with the pre-literate, like the “Mr. Yuck” image on poisonous household products, or across language barriers.

Most important, a warning is something that communicates factual information that allows people to more accurately assess risks. The warning on the ladder does not convey the message “if you step on the top step, you will die from Ebola tomorrow.” If it did (imagine for the moment that people would believe it), it would definitely keep people off of the top step, not because it warned them of the risk, but because it lied to them about a non-existent risk. A statement on a low-risk tobacco product that says, in effect, “this is as harmful as smoking” is not a warning, then, but a manipulative lie. Popova and Ling were testing the effects of lies, not warnings.

Moreover, the gory graphics that “public health” people are so fond of, and that were used in the present study, are worse still: they do not convey information at all, true or false. That is why they always come with a caption — there is no information content in the picture. Instead, the picture is emotional violence designed to manipulate people’s non-rational feelings. We[*] wrote about that at some length in a comment to the FDA on the topic, and while the FDA ignored it, a lot of my language and reasoning found its way into the court ruling that blocked FDA from imposing graphic “warnings”, so apparently Judge Leon’s clerks read it, at least.

[*We = my THRo research shop which was later merged into CASAA but was not part of it at the time.]

It is not exactly news that you can manipulate people’s behavior with emotional violence, and that emotional violence and other emotional manipulation is more effectively delivered via photographs than via text or information-containing abstract graphics. Public health people are just borrowing the tools that have been perfected by warmongers and tyrants, people with the same willingness to sacrifice human welfare for their cause.

Policy recommendations

The authors’ main conclusion is:

Regulatory agencies should not allow “lower risk” warning labels, which have similar effects to the “FDA Approved” label, which is prohibited, and should consider implementing graphic warning labels for smokeless tobacco products and e-cigarettes.

Set aside the patently idiotic reasoning (“truthful information X had a similar effect to false claim Y by our simplistic measures, and therefore it must be bad to tell the truth”) and focus on “should”. That word implies that, based on some stated goal, they believe the evidence shows that the particular actions would further the goal to a degree that justifies their cost. As I pointed out in my comment addendum to my essay on the history of the public health pseudo-ethic (which some might have thought was already long enough without it :-), it is basically never ethical to make a policy recommendation in a research report. A report on a single piece of research does not even show that the scientific evidence as a whole supports the simple immediate conclusions that might be drawn from the one study in isolation. It certainly never contains an analysis of the complete ramifications — costs, benefits, real-world effectiveness — of a proposed policy. Thus recommendations like this are always unethical.

But this one is far, far worse than average.

I read through the introduction and rambling tangential final bits of the paper with this conclusion in mind. (And, yes, it was painful. Normally, when I am interested in research results, I never read those sections — and I recommend that strategy to everyone. They basically never contain useful information.) Their implicit justification for all of this is the authors’ declaration in the first paragraph that “tobacco use” is very unhealthy, which of course is all about the effects of smoking. They offer no justification at all in this paper for why someone would ever want to discourage smoke-free tobacco product use, whatever the method. None.

They report on countries that have imposed emotional violence graphics on packaging or are considering it, perhaps thinking that this is an argument that it must be a good idea, and undoubtedly not having the intellect to figure out that this would also serve as an argument that beheading people for witchcraft or drug possession is a good idea. They then recount efforts to try to get the accurate messages across, such as RJR’s initiative to change the inaccurate warnings on smokeless tobacco. They never once say that such efforts are a bad thing, let alone justify such a claim. They never deny that RJR was seeking to have the labels changed to be more accurate. They just assume that everyone else shares their religious extremist views about beheading …er… making people believe falsehoods about tobacco products, and therefore any attempt to change that must be evil.

The study, as they interpreted it, showed that changing the label to RJR’s proposed messaging would significantly lower people’s perception of the risk from smokeless tobacco. That is, it would move them closer to believing the truth, though they would probably still grossly overestimate the risk (this is not clear because the methodology is such a joke that it is difficult to interpret the results). The authors obviously do not like people knowing the truth, but they never even say this is a bad thing; they just assume that their target audience already thinks that.

That is the extent of their (non)justification of their conclusions. There is no statement of goals. There is no actual statement (only innuendo) that it is better if people believe false negative information. There is no discussion of real-world effects of labels or whether their artificial and vague study results seem like they translate into anything real. There is not even a review of whether other research tends to support their specific conclusions (they lie with the usual “this is the first research to do exactly what we did” line to imply there is no other relevant research in the world; there is tons of relevant research — they were just too lazy to bother with it).

In other words, with no stated goal or justification, and no analysis of what acting on these recommendations might really do, they nevertheless make the recommendations.

I discussed at length the implicit and never-justified pseudo-ethic in public health — pursuing the longevity and purity of the bodies they wish to protect from the owners of those bodies, whatever the cost. But these authors make even that look good by comparison. They are making pronouncements about what should be done without even a hint of analysis that suggests it would fulfill even that goal.

Research ethics

In a field crowded with contenders, this is a solid candidate for the most unethical public health research I have ever seen. The mission of the research was purely political, as were the conclusions, making it unethical for those reasons alone. There was no apparent benefit to the “research” — the way the study was designed, there was no possible way it could tell us anything useful, even setting aside the politics and misuse of the results. And yet it harmed the subjects who participated in it. They were forced to look at horrifying graphics that are not easily forgotten. (If scenes from the movie version of A Clockwork Orange are coming to mind, I am sure you are not alone.) Subjects were given messages that were flatly false and would tend to cause harm if believed, and apparently not briefed at the end that these were fictitious. The reader of the paper is also never told that such claims in the experiment as “e-cigarettes cause oral cancer” were made up from whole cloth by the authors.

The research was “approved by the Committee on Human Research (the IRB) at the University of California San Francisco.” So either this ethics committee made the blatantly unethical decision to allow all of the above, or the researchers lied to the ethics committee about the research having value and the messages not being deceptive and emotionally violent (and if the latter, then the committee was still unethical due to its gross negligence in believing this). Moreover, this was not some random flight of insanity that was self-funded by the fanatic authors (as if these people would ever do something without getting paid for it — ha!). It was supported by a grant from the National Cancer Institute, making that agency complicit in the unethical behavior.

On top of that, BMC Public Health published it, a glaring failure of the journal review process. The research was unethical. The authors’ stated and implicit premises were false. The conclusions did not follow from the research, even if the false premises were accepted. The research was technically flawed (for reasons that pale in comparison to the ethical problems, but are nonetheless significant). And yet the “peer review” system that public health loves so much seemed to have no problem with it.

Conclusions

It follows from this that the NCI should be defunded, UCSF should be placed in the hands of a magistrate to keep them from doing further unethical research, the editorial board of BMC Public Health should be replaced, and the authors of this paper should be forced to personally apologize to everyone who was exposed to their misleading message. Because these authors, who are extremely unethical people, are in favor of warning labels, all warning labels of any kind should be eliminated.

Ok, that does not actually follow from the analysis. Though if you accepted the pattern of “reasoning” used by Popova and Ling, it would.

17 responses to “New public health research: lying to people can affect them (as if they didn’t already know)”

  1. I SO didn’t want to read that whole thing to find this out, so thank you for doing so: ‘Subjects were given messages that were flatly false and would tend to cause harm if believed, and apparently not briefed at the end that these were fictitious. The reader of the paper is also never told that such claims in the experiment as “e-cigarettes cause oral cancer” were made up from whole cloth by the authors.’ — so, NOW… Who do we report this to? Preferably en masse as a CTA. I’m so tired of this. Sylvia Burwell? (replacing all NCI personnel?) The Regents of the University of California? (replacing the ethics body?) Are there ethical bodies with more brains and, more importantly, more CLOUT than the UCSF quote-unquote ethical committee? Speaker Boehner (de-funding NCI?)

  2. As a social scientist, with an interest in protecting both the public and the reputation of my field, I am going to file formal charges of ethical violations against everyone involved in this study, from those who funded it to those who conducted it. This egregious behavior calls for all responsible people who care about the integrity of tobacco research to ask questions and demand answers.

    • Good idea. Some of us are contemplating what would be optimal use of effort along those lines. No reason anyone should not just do their own thing, though, of course. If you make a move, please let me know what you do, who you figure out to contact, etc. — no use reinventing the wheel.

      • Here are the relevant ethical code sections (there may be others) that govern the actions of social science researchers. The lead author of this manuscript, Lucy Popova, has a PhD in communications from UC Santa Barbara.

        The Code of Professional Ethics for the Communication Scholar/Teacher
        (adopted by the NCA Legislative Council, November 1999) states in part:

        Communication researchers working in the social science tradition are urged to consult the APA guidelines for specific advice concerning the ethical conduct of social scientific research. (page 2)

        The American Psychological Association’s Ethical Principles of Psychologists and Code of Conduct states in part:

        8.07 Deception in Research
        (a) Psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study’s significant prospective scientific, educational or applied value and that effective nondeceptive alternative procedures are not feasible.

        8.08 Debriefing
        (a) Psychologists provide a prompt opportunity for participants to obtain appropriate information about the nature, results, and conclusions of the research, and they take reasonable steps to correct any misconceptions that participants may have of which the psychologists are aware.

        (b) If scientific or humane values justify delaying or withholding this information, psychologists take reasonable measures to reduce the risk of harm.

        I am alleging in my complaint that the authors knowingly exposed their research participants to a deceptive warning (that e-cigarettes can cause cancer), without adequate scientific justification for such deception. At no time did the authors attempt to inform the participants that the content of this warning was false. And, failing to adequately debrief the participants of this false information, they exposed them to unnecessary harm by leaving them with the false impression of considerable cancer risks associated with electronic cigarette use, which could negatively affect participants’ future health decisions regarding nicotine and tobacco use.

        I’m keeping my complaint narrowly focused on actions the authors took that can be directly tied to the specific text of the ethics code.

        • Carl V Phillips

          Thanks. Very helpful. The rules about deception were basically what I was referring to, though I could not quote them from memory of course (especially the bit where they misuse “debriefing” in their heading when they mean “briefing” — that is pretty funny).

          However, note that we do not know for sure that the authors did not inform the participants of the deception. I only noted that there is no indication. But the description of the methods is so crappy that it obviously left a lot out, which might include this. Indeed, the reality might have even been worse than nothing — those who got the correct information might have actually gotten a post-briefing that told them not to believe it. So keep that in mind with this particular piece of a complaint. Also, you should include smokeless tobacco in that — Clive had good language in his post (in the submitted comment to BMC) which I suspect he will not mind if you borrow.

          Finally, it would be worth considering some of the rules of the University of California and of the journal, and piling on the complaint rather than being so focused that they can argue the point on a technicality. I will be addressing the latter in a blog post shortly. Looking into the former is on our list of things to do. Again, anything you find is welcome.

    • Is there a “Filing ethics violation charges for dummies” book somewhere that I could read? Or does it only work if you are in certain communities? I’m not an academic or medical professional. But I pay the taxes to these people to protect my health, and to educate my children and grandchildren.

    • One thing I overlooked mentioning was that the authors declared they had no competing interests. But this is Biomed Central’s policy regarding that (see http://www.biomedcentral.com/about/editorialpolicies#CompetingInterests ):

      “Competing interests may be financial or non-financial. A competing interest exists when the authors’ interpretation of data or presentation of information may be influenced by their personal or financial relationship with other people or organizations. Authors should disclose any financial competing interests but also any non-financial competing interests that may cause them embarrassment if they were to become public after the publication of the article.

      “Non-financial competing interests

      Non-financial competing interests include (but are not limited to) political, personal, religious, ideological, academic, and intellectual competing interests. If, after reading these guidelines, you are unsure whether you have a competing interest, please contact the Editor.”

      These authors obviously have strong political/ideological as well as institutional competing interests.

    • This would be the best active news I’ve heard in a long time. I’ve often considered this form of activism to be a step that needs to be explored. Going further… complaints to state AGs? Maybe even federal (as we await who replaces Holder) depending upon the scope. Though what to do when a state AG sits on the board of Legacy? That in itself feels like a lapse in ethics.

  3. Pingback: New public health research: lying to people can...

  4. Pingback: Falsely exaggerating risks scares people off things – new study finds « The counterfactual

  5. Thanks for the comments. I’m still working on the text of the complaint letter and how things should be phrased. I have language to the effect that the manuscript does not indicate what the participants were told about the truth or falsity of the statements they were exposed to, and this is important information for determining the presence of ethical violations. When I was working at M. D. Anderson in the 00s, sitting on one of their internal IRB committees, I was constantly struck by how the medically trained researchers didn’t know squat about even the most basic social science research methodology. I frequently had to reject and return protocols with obvious confounds or research design flaws that made them useless for addressing the stated research aims.

    With this in mind, I’m running through the various scenarios of what Popova and Ling might have actually done, given they were trained in communications and medicine respectively. If they were adhering to good research guidelines and ethical practice, they would have warned participants that they were about to see some tobacco product warnings that included disturbing images. They could have stated that these warnings were “proposed” or “hypothetical”, but they wouldn’t have to. The key is what participants were told after the experiment was over. If I had reviewed their protocol for the IRB (assuming I didn’t reject the investigation on ethical grounds for being too trivial to justify exposing people to disturbing images), I would have insisted they fully inform the participants of the scientific evidence related to the warnings they saw, to avoid misconceptions under APA 8.08(a). Given Popova and Ling’s stunning lack of scientific rigor in conducting the study in the first place, I’d be surprised if they paid attention to what might seem to them like a fairly minor detail. Moreover, I also wouldn’t be surprised if they thought the warnings they provided were literally true and there was no need to tell the participants anything. So, the crux of my complaint lies in determining whether the participants were left with misconceptions about the risks of e-cigarette use.

    As for the strategy in how to voice ethical complaints about this study, I’m okay with including a more comprehensive list of violations especially if there’s a section of the ethics code we can appeal to. I think whatever complaint we make, it should be made first to some governing body that has the power to sequester the study records before they can cover their tracks. I have a list of agencies and organizations and their addresses for such a complaint when the time comes.

    • Be sure to see my next post. It has a bit more about the reneging on the promise to provide a free sample, which I glossed over before but which is in some ways even worse. And it points out how the protocol was so badly designed that there was no chance it could show anything other than the obvious direction of impact of telling people things. Thus, the ethics of merely burdening human subjects with participation were already pretty dubious, even apart from the multiple deceptions. That is roughly the same as what you said much better in this comment.

      You seem to have a better operational plan than I do, and more expertise about IRBs. I will follow your lead.

        • (Oh, and what do you think the chances are of any ANTZ actually reading these comments and advising them to cover their tracks before we act? I would guess pretty minimal (they don’t read), though not zero.)

          • Actually, I’m fine with the ANTZ tuning in and trying to warn them about a possible ethical complaint headed their way. From what I understand now, after doing hours of research on ethics, on how detailed an ethics investigation can be, and on the near-unlimited power investigators have, I have some honest advice for the ANTZ to pass along to Popova and Ling. By all means, take a pass on the month off with pay and the extra ethics classes. Instead, do something that’s guaranteed to get you fired and run out of the profession if you get caught. I’m willing to gamble all my efforts against your career that you’re no more competent at covering your tracks than you are at doing research. Go for it.

  6. Pingback: What is peer review really? (part 4 — a case study) | Anti-THR Lies and related topics

  7. Pingback: The failures of peer review do not begin with the journal – more on the Popova-Ling fiasco | Anti-THR Lies and related topics

  8. Pingback: Real implications of the RSPH “sting” of ecig vendors | Anti-THR Lies and related topics
