New statistics about vape risk misperception (and a subtle extra-bad implication)

by Carl V Phillips

A new paper in JAMA Network Open by Jidong Huang et al. from Georgia State University provides some new statistics about just how effective the war on vaping is, in terms of the average American’s perceptions of risk. Despite working for one of FDA’s pet research shops, the authors make clear their opinion that it is bad that so many people think that vaping is as harmful as smoking or worse.

The paper is short, but really it should be shorter — just a Methods section and these two graphs:

[Figure: Huang et al.'s two graphs of vape risk perception over time]

The first graph is from the authors’ own survey, while the second is from an NIH survey. Both surveys have some issues, but those do not really matter much for present purposes.

The leap-off-the-page takeaway from this is that knowledge about the relative harms has moved in the wrong direction. Fewer people understand even that vaping is less harmful than smoking, let alone how much less harmful it is. (The NIH survey has a “much less” response option, but these authors bundled the results with “less” to match their own survey’s limited response options.) More people believe they know that vaping is more harmful, or at least just as harmful.

Of course, we knew all this already, from various other sources of data. The authors inexplicably did not even bother to break the results out by other variables. In particular, we do not know how current smokers’ perceptions — which is what matters most in terms of the harm caused by these false perceptions — compare to the population-average ignorance. So this paper adds little other than a news cycle bump. I find it interesting, however, because of that “don’t know” category.

It is possible to be somewhat optimistic and say that the misperception trend plateaued around 2015. However, it seems likely that the current deluge of anti-vape propaganda, unleashed over the last year and a half by the FDA and the TCers who call their tune, has made it worse again. Moreover, the “don’t know” answers add another reason for pessimism.

Just throwing in a “don’t know” option is a pretty dubious survey method, though these authors seem rather proud of having done so. People have very different thresholds for giving that answer. Some will give another answer if they have a hint of an inkling, while others will choose “don’t know” even if they are fairly sure. A few might even be deep thinkers who do not believe it is possible to know anything other than the existence of your own consciousness, and so answer “don’t know” any chance they get.

Less whimsically, these thresholds are probably associated with subjects’ perceptions. For a survey question of “What would be the best policy for dealing with North Korea?”, I would suspect that those who choose “don’t know” are more informed than those who think they do know, and if the former were forced to pick an answer their distribution would be quite different from those who did answer.

A better approach would be to ask two questions, one where subjects are forced to choose a value and a second where they are asked about their level of confidence in that answer. There are a lot of interesting things that can be done with that combination. Subjects who really have absolutely no guess at all (“Which of the following is the Akkadian word for horse?”) could still just skip the first question, as happens for many questions on most surveys. But we have what we have, so what can we make of it?
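To make that concrete, here is a minimal sketch, in Python with pandas, of one way the answer-plus-confidence combination could be tabulated. This is not the authors’ method; the data file, column names, and response labels are hypothetical, purely for illustration.

```python
# Hypothetical sketch: pairing a forced-choice risk answer with a
# self-reported confidence level. File and column names are assumptions,
# not taken from the Huang et al. paper or the NIH survey.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical data file
# Assumed columns:
#   "risk_answer": "less harmful" / "about the same" / "more harmful"
#   "confidence":  "low" / "medium" / "high"

# Distribution of confidence within each answer category
table = pd.crosstab(df["risk_answer"], df["confidence"], normalize="index")
print(table.round(2))

# One of the "interesting things": the share who hold a wrong belief
# with high confidence, the group that is hardest to move, per the
# argument that follows.
confidently_wrong = df[
    df["risk_answer"].isin(["about the same", "more harmful"])
    & (df["confidence"] == "high")
]
print(f"Confidently wrong: {len(confidently_wrong) / len(df):.1%}")
```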

Obviously everyone who gave the blatantly wrong answers about the risk from vaping did not know, but just did not know they did not know. So as we watch the “don’t know” population in the first graph diminish over time, almost all shifting into the “about the same” category, we are not observing a reduction in ignorance. Rather, we are seeing a replacement of knowledge (knowing you don’t know is knowledge, and is presumably accurate) with misperception.

This trend is every bit as bothersome as the increases in the false beliefs themselves. As people “learn” enough about this topic to change their self-perception from “don’t know” to “know”, almost none of them actually join the category of people who do know. It is bad when we (advocates, science writers, public scientists, government agencies, etc.) fail to dislodge the false beliefs of those who already have them. It is worse when we fail to even move those with no strong beliefs toward the truth rather than away from it.

Of course, there is shifting among all of the categories over time, and there is little way to figure out how much. This is not a question that lends itself to tracking in a cohort — if you ask people this question and they know you are going to ask it again, you dramatically change what they will learn in the meantime. So, of course, some people went from knowing they were ignorant to knowing the truth. But they were roughly cancelled out by those who went from knowing the truth to “knowing” something false.

The worst part is that someone who has become committed to a false belief, even for the most tenuous of reasons, is much more difficult to persuade of the truth. I have written at length about this before, but in short: Once someone commits to a belief, mere evidence that they were wrong — no matter how much more solid it is than the original reason for the belief — is seldom enough to change their mind. It is necessary to also affirmatively convince them of an alternate narrative about why they came to the wrong belief, and why this was understandable and not their fault. A few percent of the population think like scientists and delight in the beauty of learning a belief was wrong. Most people find it — consciously or subconsciously — to be an insult to their honor/intelligence/worth to be told that even their most trivially-held beliefs are wrong.

Many of us find ourselves using that alternate-narrative strategy fairly often in personal interactions. (“It is all a political game, like ‘weapons of mass destruction’ was. You have been successfully taken in by intentional propaganda. It is not your fault — we all have to believe most of what the authorities tell us about matters we have no expertise in. But in this case it is easy to see they are wrong if you think about….”) But most of us find that the conversion rate is pretty poor even with that tactic.

Thus, even if the (abysmal) distribution across the other three categories does not get any worse over time, a sharp reduction in those who know they have something to learn is a huge setback for getting more people to understand the truth.

3 responses to “New statistics about vape risk misperception (and a subtle extra-bad implication)”

  1. And this is precisely why I switched from CONSTANT participation at a vapers’ forum — where I was in effect “preaching to the choir” — to being MUCH more active as an advocate, in whatever limited ways I can — mostly online. When I began participating at Quora, I was shocked and appalled at the degree and prevalence of total ignorance about vaping vs. smoking. So I continue to seek to fill that vast vacuum with actual FACTS about vaping. Vapers’ forums are marvelous places to learn the various technicalities of vaping… but for advocacy efforts, they’re nearly useless, because 95% of those at the forum already know most of the facts, or are in the process of learning them.

  2. The Dark Ages return! Both charts could be titled “Effectiveness of Lying to the Public”, thus achieving the accuracy they now lack.

  3. Thank you, Carl. Insightful as usual.
