Category Archives: Background

My first Patreon science lesson tutorial

by Carl V Phillips

I wanted to let my readers know:

1. In case you missed it, I have launched a Patreon account in order to be able to keep doing this blog, my Twittering, and such. Please consider becoming a patron to support my contributions, via my Patreon page.

2. I have started the first science lesson tutorial at that page (more explanation at the above link). You can join in the discussion or just read along with it here.

Re those tutorials at my Patreon page: The first one is open access, but future ones will be the premium content for patrons. I should make clear that donations to my Patreon are primarily about keeping my public writing going. The premium content is an added bonus for supporters, which you can take advantage of or ignore — either way, your support will help keep this blog going.

Footnote: Paper review posts

This is a prepositioned footnote to explain a series of posts I will be publishing.

I expect to soon be launching a major project that will publish a large number of proper peer-reviews of recent journal articles and some other papers in the THR space. (Fair warning to anyone planning to publish junk in the near future!) So, in order to lay in some material for that, develop protocols, learn-by-doing, and such, I am writing some entries for that collection now. Given that I am doing it, I might as well post them here. To find those posts, look in the comments section below for pingbacks.

The publications in this collection will not read like a typical blog essay, though they will be readable and reasonably free-standing, unlike a peer-review for a journal. For those familiar with the latter genre, think of them as a thorough and high-quality journal review — a rarity, I know — with a few hundred words added here and there to make it readable as an essay for someone not intimately familiar with the original paper. (And also with what would have been “the authors should fix this” phrasing changed to be phrased in terms of “the authors made this mistake”, because they also made the mistake of finalizing their paper before seeking the advice they needed to fix it.)

For those not familiar with journal reviews, just know that these pieces will not just address one or a few interesting points in a narrative style and ignore the rest of the paper, as an essay would. They will have those interesting bits, but they will also step through a protocol for addressing each aspect of the paper (e.g., is the literature review in the Introduction legitimate, are the Methods adequately presented, etc.). Some of the bits will probably require reading the original paper to make sense of them. For the reviews that I write, I will try to put any interesting narrative bits first, and make those free-standing. This will offer something to casual readers, and if you are not interested in the full review, you can stop reading when you get to the disjoint bits about other aspects of the paper.

That is basically what you need to know to make sense of what you are reading. Once I have the guidelines more developed, I will post a link here if you want to delve deeper. In particular, I will be recruiting freelance contributors to write reviews, so if you are qualified and interested, please take note.

Phillips and Burstyn departure from CASAA

by Carl V Phillips

CASAA (The Consumer Advocates for Smoke-free Alternatives Association), the publisher of this blog up until today, has decided to focus its resources on mobilizing responses to state and local regulation proposals and it thus cannot devote substantial resources to science and education efforts. The position of chief scientist (held by me since its creation) has been eliminated and thus my role with CASAA has ended. Citing a lack of interest in an organization with this change of focus, Igor Burstyn resigned his membership on the CASAA Board of Directors and is also no longer associated with the organization.

I will be keeping possession of this blog, though obviously nothing from this point forward will be written on behalf of CASAA as the previous posts were. I am not entirely sure what I am going to do with it. For the moment I will keep posting, resources permitting, though I expect I will focus on big-picture and deeper issues rather than on critiques of individual bits of bad science (as I have been tending toward for the last several months anyway). While the former is not such great click-bait as the latter, it is important that someone keeps on it. Even though it is not as widely read for entertainment, I notice that the deeper analysis does trickle into the wider conversation. I think my scientific education efforts regarding THR over the last decade, along with Rodu’s, Bates’s and others’, have empowered enough people to be able to do the hot takes on the individual bits of junk science, so my efforts there are not so important as they once were.

Of course, it is possible that whatever I do next (I am still trying to sort that out) will preclude me from continuing the blog and related research and analysis. I might go off in a completely different direction, such that I cannot keep up with this. Or I might take a position that is not compatible with speaking freely. We shall see.

Lies and conflict of interest

by Carl V Phillips

Chris Snowdon understands what constitutes true conflict of interest, and provides us with two critical observations about it relating to THR.  I point out that he understands it because most people who invoke the concept, especially those who make it a centerpiece of their rhetoric, clearly do not have a clue about what it means.

Conflict of interest occurs, to put it clearly enough that even those who harp on the concept might understand, when there is an interest someone is supposed to be serving (perhaps due to their job description, but possibly just because of how they are representing themselves), but there is something about that individual that might cause them to favor some other interest that is in conflict with the one they are supposed to be serving.  Notice that this is not remotely similar to the usual naive misunderstanding of the concept, that COI exists if and only if someone has received funding from industry.  Indeed, notice that funding is not only not sufficient for COI to exist, it is not even necessary.

For example, when Stanton Glantz humorously tried to take on Igor Burstyn’s science, he had a severe conflict of interest:  He was pretending to offer a scientific analysis, and thus was obliged to fulfill the interest of being an honest scientific analyst, but because Glantz is really motivated entirely by personal politics when he is pretending to be a scientist, there is a conflict of interest.  By contrast, if Glantz had published his screed and had made clear his real goal — “here are some talking points that those of you who wish to deny what this study demonstrated can use” — then there would have been no COI.  This is because his personal politics would be perfectly aligned with the interest he claimed to be serving, so there could be no conflict.  Whether some interest creates a conflict obviously depends on what interest you are claiming to serve.  That should seem rather obvious, but again is apparently completely over the head of almost everyone who presumes to make a big deal about COI.

In this brief but completely damning post, which he quite rightly describes as the “conflict of interest of the week”, Snowdon reports on a director at a smoking cessation clinic complaining about the success of e-cigarettes in taking away his clients.  The director includes an attack on vaping as part of that.  So, what interest is a government-funded smoking cessation employee supposed to be pursuing?  Smoking cessation, of course.  If he is objecting to successful smoking cessation it must be because he is more interested in serving some other interest, say, keeping his job.

In this much longer post, Snowdon explores how COI seems to be the defining factor in government decision-making about e-cigarettes.  The interest of a government official is supposed to be the interests of her constituents or the people.  Snowdon recounts a recent story that, by itself, is the typical naive COI story:  The expert panel that advised the UK MHRA on e-cigarettes included people who had gotten grants and contracts from pharma companies.  Left by itself, that is a major *yawn* — everyone with any skills in health research has gotten grants/contracts from pharma companies.  But treating that as if it were somehow noteworthy in isolation is exactly what most commentators did.  Would the influence from such funding alone cause someone to lie?  Doubtful.

By contrast, Snowdon goes on to put this in the context of the outright bribery that seems to dominate EU parliament votes related to tobacco and THR.  Of course, the EU is pretty much purpose-built for COI.  The representatives are so incredibly far from their constituents, and barely monitored by them or the press, and live in a disconnected world that is all about power and connections and prestige (and thus the handing out of material benefits).  Europe still loves monarchy (though if you have real monarchy there is no COI because there is no obligation to the people — l’etat c’est moi).

Snowdon leaves it to us to connect the dots, but I think he is pointing out that perhaps we should be rather more wary about observations suggesting there might be a hidden COI (even though they do not themselves represent a real COI) among decision makers.  A history of a few grants and contracts means nothing, unless it is a corner of a serious COI scandal.  It is as if we were considering tobacco industry behavior and we were still living in 1975 (which, apparently, most of tobacco control thinks is the case) and there were subtle hints of tobacco company influence over decision makers.  That would be in the context of definitive evidence of improper influence elsewhere, and would suggest extra scrutiny.  Except in this century, of course, it is everyone other than the tobacco industry that seems to warrant the extra scrutiny.

Finally, it is worth noting that while the prospects of personal financial gain might explain the behavior of the MEPs, I actually think it is way down the list of important COIs that caused people to lie about e-cigarettes in the other cases.  Tobacco control and smoking cessation people are probably much more conflicted by the desire to personally be responsible for health benefits, a self-centered but not precisely selfish motive.  That is, they are desperately afraid that the goal will be achieved in spite of them rather than because of them.  This interest conflicts severely with the goal itself when the world moves on and the goal is better served by what others are doing.

CASAA’s take on the recent move by MHRA

For those who read this blog but not the main CASAA blog, you might be interested in our assessment of the UK MHRA’s move to require that e-cigarettes be approved as medicines.  You may find our analysis of what regulation by MHRA would look like to be somewhat more optimistic than what you might have read elsewhere.  However, we are rather more concerned than some other commentators about its implications for the EU.

Why most health policy recommendations are lies

by Carl V Phillips

I taught a class today to a group of public health students, with the theme that policy recommendations made based on an empirical study of a risk factor (e.g., an epidemiology study about the health effects of a behavior, or a study of the chemicals found in e-cigarette vapor) are never justified.  Or, in the terms of this blog, are lies (not in the sense that they do not reflect the authors’ recommendation, obviously, but the claim that the recommendation follows from the study is a lie).  There are five distinct reasons why making such recommendations is inappropriate.  That list is, I think, rather informative for disciplined thinking about promoting THR, so I thought I would share a summary of the basic points from the class with my rather larger audience here.

I started out by asking them if they had ever read a paper where the authors do a single study about a possible risk factor and then make broad policy pronouncements at the end.  I interrupted before they answered to assure them that I was joking – they are in public health, so of course they have read papers like that.

As motivating examples for the discussion, I had them read the post from a few days ago, about proposals to either ban cigarettes or drastically reduce the nicotine content, and read enough to know about plans to develop nicotine “vaccines” that would prevent someone from experiencing the effects of nicotine.  I also threw in Bloomberg’s soda ban (I love it when the lead headline in the New York Times is on-topic for the day’s class!).

The reasons why it is a lie to tack on policy recommendations to a risk-factor study:

0. The results of the study might not be right.

I did indeed start the counting at zero because this one is a bit different.  It is not about the wisdom of the policy, but about the study result itself.  A single study does not give us a definitive assessment that an exposure causes a particular outcome.  If it is the only such study that exists (which is rare — happens only once per exposure+disease, obviously) there is still whatever other knowledge we might have.  In theory a good paper could review the other evidence and draw conclusions about the totality of the evidence, but that is exceedingly rare (it usually requires a dedicated review paper to try to do that).  Thus, the particular study in isolation cannot tell us much about the risk, let alone how to respond.

Note that this applies to studies that suggest there is no risk.  Indeed, even more so.  The same possible errors that might cause a single study to exaggerate a risk could also cause it to miss a risk that really exists.  In addition, there are plenty of ways to do a study that will miss a phenomenon even if it exists.  Thus, pointing to a single study and claiming it is evidence that we do not need to act is an even greater mistake.  (Thus the reason that I and CASAA make it a point to avoid doing that.)

1. The proposed policy might not accomplish the goal.

It might be that an exposure is really causing a disease, but that a specific proposed intervention might not actually reduce the disease even though a naive knee-jerk impression says it might.  It might even be that no conceivable intervention could accomplish the goal, so even a general “something should be done to…” recommendation cannot be justified.

For example, Bloomberg has been furiously attacking the overturning of his soda ban by repeating observations about obesity being a problem.  But would banning 20 ounce Cokes do anything significant to reduce obesity?  The best guess is “no”, but more important, there is no reason to believe the answer is “yes”.  Governments like to engage in the “logic” of saying “there is a problem and something must be done; this is something; therefore this must be done.”

2. The intervention might create other harm (in the same realm where it is intended to do good — i.e., it might cause other health problems).

Bloomberg also moved to make food less flavorful (by reducing salt); this tends to make people want to eat more and thus become obese.  The proposal to reduce the nicotine in cigarettes would make them less appealing, no doubt, but it would also cause many people to smoke more of them.  The question of whether an intervention might cause other health problems is not answered by the study of a particular exposure+disease combination.

3. There will be costs to implement the policy; is it worth it?

The question of policy making becomes far more complicated still when we realize that most policy actions entail costs, often quite substantial.  No risk-factor study could possibly address this.  Assessing the costs and benefits of a policy generally requires more analysis than an entire risk factor study.

Why not just ban smoking?  If it worked, it would eliminate the health costs.  One reason is that the costs (causing people to lose the benefits of smoking and enforcement costs) would be enormous.  On a less dramatic level, even if Bloomberg’s plan would reduce obesity some, would that be enough to justify the various rather high costs?  It does not appear that anyone bothered to ask that question.

4. Is it ethical to do (even if it would work)?

This is, of course, the question that generates the most animated conversations.  I will not rehash the basic libertarian arguments here.  Nor will I attempt to delve into more subtle points.

But I will mention an observation I made to the students:  Some portion of the population would probably support giving their kid a vaccination that would prevent the child and the adult he will someday be from experiencing any benefits from nicotine.  Some portion of the population would argue that it should be mandatory (or close to), like the pertussis vaccine.  But probably roughly the same portion of the population would favor a hypothetical vaccine that would ensure that the kid is not gay or a similar magic bullet that would prevent him from ever embracing the teachings of the Koran.

The implication of that is that “public health” — the activist movement, as opposed to actual public health — is a special interest group filled with people who do not seem to realize that the interventions it demands are widely considered just as deplorable as anti-gay or anti-Islam interventions.  I took the opportunity to point out that any student who was planning to go into “public health” (as opposed to working in some more acceptable way to improve people’s health) should realize that they are on the wrong side of history.  While policy advocates in that area were once, legitimately, considered heroes, the generally celebratory reaction to Bloomberg’s plan being struck down by the court should give them pause.

[There are some concrete implications of this list for THR advocacy.  I will come back to that in a later post.]

Book Review – “Electronic Cigarettes; what the experts say” James Dunworth and Paul Bergen eds.

by Carl V Phillips

This new short electronic book (available for a nominal purchase price at Amazon USA and UK, with free reader software available for all typical platforms) collects interviews that the editors conducted from 2009 to 2012.  Most, though not all, are indeed experts on the topic.  Four (myself included) are researchers who had published research on e-cigarettes at the time of their interviews, and the remaining 13 are researchers in related fields, political operatives, commentators, and community leaders including CASAA’s own Elaine Keller and ECCA’s Chris Price.

(A few disclosures about relationships: I brought Paul into THR work and he worked in my THR research group at University of Alberta School of Public Health for about five years, longer than anyone else, and we did numerous projects together.  I have also coauthored with James (he collected the first survey data about e-cigarette users and I volunteered my group at UASPH to analyze it).  The proceeds from sales will be split between CASAA and ECCA, though given the low sale price, this is probably less a source of bias for me than the choice to feature me as the first interview in the book.  James is an e-cigarette merchant.  Paul has done paid work for James, presumably including this book, and now works for an e-cigarette trade association.)

The interviews – mostly written exchanges apparently, though mine and a few others are transcripts of oral interviews – are a mixed collection of snapshots.  Some provide in-depth views of the subjects, while others are broad overviews.

Some of the older interviews provide interesting historical perspective.  In 2009, I expressed worry about a contaminated batch of e-cigarettes (or, more precisely of e-cigarette liquid) causing acute poisonings.  I am genuinely surprised that over three years have passed with no such incident, and I think it is still a possibility.  As noted by the editors in a comment, there are self-regulation systems in place to reduce this risk, but there are still far too many wild cards in the market.  It is a good reminder that the authorities whose duty it is to try to ensure quality of what people buy, and thus prevent such an incident, have already wasted years pursuing bans rather than doing their jobs – something else I noted in 2009.  If such an incident does occur now, there will be no denying their guilt in letting years pass without attempting to provide any regulatory guidance.

The time capsule provided by the 2009 interviews is also a good reminder about how little historical memory the e-cigarette community has, a good reason for producing a book like this.  The best response to the FDA anti-e-cigarette propaganda has not changed from what I and others observed in the interviews, and yet we need to keep re-writing this same information.  Of course any frustration from that pales in comparison to what comes from trying to get the information beyond our community.  The interviews are a reminder that if the current incarnation of CASAA (and ECCA) had been active in 2009, we might have had a better shot at capping the damage done by the US government’s lies.

The 2009 interview with Adrian Payne, formerly of British American Tobacco, has a similar feel to my interview from the same year.  At that time, for example, it was generally recognized that most of what we know about the risk from e-cigarettes is extrapolated from our knowledge of smokeless tobacco.  That is barely less true now, and so it is an interesting reminder of how quickly this was forgotten as the e-cigarette community emerged over the last four years.  Nowadays, the belief that e-cigarettes are low risk mainly traces to the fact that this was believed in 2012, and that belief in turn is an echo of what was believed in 2011.  There is remarkably little awareness that this recursion traces back to 20 years of research on Swedish and American smokeless tobacco, and my calculation that it is roughly 1/100th that of smoking, coupled with our best guess – which has stood the test of a few years – that there was nothing about e-cigarettes that would make them substantially more harmful than smokeless tobacco.  Overly precise claims that suggest we know more than this about the risk from e-cigarettes, claims which are basically just made up based on nothing and repeated, can be found in several of the more recent interviews.

The 2011 interview with Scott Ballin focused on his optimism about finding common ground between real public health advocates, the ANTZ (a term that he would presumably not use, and indeed that had not been coined yet), and other factions.  He specifically was optimistic about the FDA Center for Tobacco Products serving as an honest broker.  Two years is not a long time, and things could change, but the trend definitely does not support his optimism.

The elegant gem in the book, in my view, is the interview with David Sweanor, which ranged across general observations about the past and future, making it a somewhat better fit for a collection like this than some of the other chapters.  I disagree with several of the specific points Dave made, but the broad sweep was insightful and well crafted as a whole.  Sadly, if that interview were simply reposted today without a date, the reader would be hard pressed to notice any clues that it is almost four years old.  A lot of details have changed in that time, but the overview narrative has seen limited progress.

The recent Clive Bates interview provides another nice overview of the arguments for THR.  Once again, the historical observation from this is that the same observations could have been made at the time of the earliest interview in the book (though Bates was not working in this area at the time), and are basically what I and others (including Bates, during his previous incarnation in the field) have been writing for more than a decade.

Other interviews focus on the details and are basically current.  These were conducted in greater depth, and so offer somewhat different value compared to the historical snapshots.  Konstantinos Farsalinos offers interesting observations about the situation in Greece and his view about optimal research strategies.  The Riccardo Polosa interview is more of a typical journalist exploration of a single study and its results, as well as some details about the situation in Italy.  Elaine Keller provides a great discussion of the recent US politics of THR, as well as her compelling personal story.

The story of Chris Price and ECCA and ECF is interesting, and much of it was news to me.  His biting insight (in what is really more of an authored essay than an interview) makes it valuable reading even for an expert on the topic, though the reader should be cautioned that even I think he is perhaps a bit too cynical in some of his observations (yes, it is possible to be too cynical, even when observing opposition to THR, though I think it does lead to exactly the right conclusions about what we should be doing).

More generally, the reader should realize that this is definitely not a reference book, and is not designed to be a primer on the topic for someone just learning about THR or e-cigarettes.  There are some statements by interviewees that are out-and-out wrong and at least one of the interviews would make many readers decidedly less knowledgeable if all the content were believed.  Many other statements are defensible but debatable, and the reader will not be aware of that debate without extensive outside knowledge.  Thus, the book functions best as a “reader” – a collection of thoughts for someone who already has a general understanding of the topic and is able to bring some critical thinking.

The fact that the book consists of interviews that were intended to be free-standing short overviews that emphasized the hot topics of the month creates several limitations.  The questions asked in the interviews were somewhat random, with most interviews tending toward a general overview rather than a focus on the particular individual’s expertise.  Non-scientists were asked about as many scientific questions as the scientists, for example.

The interviewers do not seek to illustrate differing views and do not probe points of controversy.  Some contrasts are immediately apparent, such as Price vs. Ballin (and points in between) on whether there is any value in us genuine advocates for consumers and health trying to work with the “public health” power brokers.  But there are few questions, other than general overviews, that were put to more than one interviewee, and thus extensive background knowledge is required to observe the evolution of thinking and points of disagreement (and to sort out one from the other).  It is there, and it is interesting reading when put side-by-side, but it does require some thinking beyond the content of the text.  Most interviews include some general statement about harm reduction being a good idea and the politics arrayed against it being deplorable, but only one or two include any further details, offering the reader limited opportunity to explore nuances of those views.

A knowledgeable or very careful reader will notice a few other contrasts.  Some interviewees have worked in tobacco harm reduction for a long time, while others became interested in e-cigarettes specifically, usually as a result of personal or family experience.  Some have done key research while others are pundits and activists who have made use of that research.  But those divisions do not correspond to the outline of the book and are not highlighted, and so even careful readers may remain unaware.  Similarly, ethical or ideological differences – those who support consumer freedom versus those who grudgingly accept THR as merely a poor substitute for abstinence, for example – are somewhat apparent, but are not probed in the interviews.

The evolution of the thinking of the interviewers themselves is apparent.  The questions posed in the later interviews definitely make the content more useful for a collected volume.  Presumably at some point during their process, the editors started to envision creating this collection, and there are rumors that they will continue this process in another volume.

A few paragraphs of context about each interviewee and some background on the subject matter covered in the chapter would have aided most readers.  This would have dramatically changed the feel of the book, though, and so presumably it was intentional on the part of the editors to let the interviews stand on their own with only a few sentences of biosketch as an introduction.  I probably would have made a different choice if I were the editor, but no one ever accused me of having a light touch.  (Indeed, I suppose this review provides some of the additional observations that I might have added, had I been writing introductions to the chapters.)

With the cautions in mind, I would suggest that anyone who regularly reads my work or otherwise has an interest and some background in the topic will want to throw a few pennies to CASAA and ECCA and get this book.

People who report health risks as percentage changes are (often) liars

by Carl V Phillips

I have been having an ongoing conversation with Kristin Noll-Marsh about how statistics like relative risks can be communicated in a way that allows most people to really understand their meaning.  There is more there than I can cover in a dozen posts, but I thought I would at least start it.  I have created the tag “methodology” for these background discussions about how to properly analyze and report statistics (“methodology” is epidemiologist-speak for “how to analyze and report data”).

Most statistics about health risks are reported in the research literature as ratio measures.  That is, they are reported in terms of changes from the baseline, as in a risk ratio of 1.5, which means take the baseline level (the level if the exposure being discussed is absent) and multiply by 1.5 to get the new level.  This is the same as saying a 50% increase in risk.  It turns out that these ratios are convenient for researchers to work with, but are inherently a terrible way to report information to the public or decision makers.  There is really no way for the average person to make sense of them.  What does “increased risk, with an odds ratio of 1.8” mean to most people?  It means “increased risk”, full stop.
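To make the arithmetic concrete, here is a minimal sketch (the baseline risks are made-up numbers, chosen only for illustration) of why a ratio measure says little until it is anchored to an absolute baseline:

```python
def absolute_risk(baseline: float, risk_ratio: float) -> float:
    """Convert a ratio measure back to an absolute risk: baseline * ratio."""
    return baseline * risk_ratio

# The same risk ratio of 1.5 ("a 50% increase in risk")
# applied to two hypothetical baselines:
rare = absolute_risk(0.0002, 1.5)    # rare outcome: 0.02% -> 0.03%
common = absolute_risk(0.20, 1.5)    # common outcome: 20% -> 30%

# In absolute terms, the same "50% increase" is an extra 1 case per
# 10,000 people in the first scenario, and an extra 10 per 100 in the second.
```

The ratio is identical in both scenarios; only the baseline tells you whether it is worth worrying about.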

Every health reporter who puts risk ratios in the newspaper with no further context should be fired (some of you will recall my Unhealthful News series at EP-ology).  But the average person should not feel bad because it is likely that the health reporter — and most supposed experts in health — cannot make any more sense of it either.

The biggest problem is that a ratio measure obviously depends on the size of the baseline.  When the baseline is highly stable and relatively well understood, then the ratio measure makes sense.  This is especially true when that deviation from the baseline is actually better understood than actual quantities.  So, for example, we might learn that GDP increased by 2% during a year.  Few people have any intuition for how big the GDP even is, so if that were reported as “increased by $X billion” rather than the ratio change, it would be useless.  Of course, that 2% is not terribly informative without context, but the context is one that many people basically know or that can easily be communicated (“2% is low by historical standards, but better than the recent depression years”).

By contrast, to stay on the financial page, you might hear that a company’s profits increased by 10,000% last year.  Wow!  Except that might mean that they profited $1 the year before and got up to $101 last year.  Or it might be $1 billion and $101 billion.  The problem is that the baseline is extremely unstable and not very meaningful.  This contrasts yet again with a report of revenue (total sales) increasing by 50%, which is much more useful information because a company’s sales, as opposed to profits, are relatively stable and when they change a lot (compared to baseline), that really means something concrete.

So returning to health risk, for a few statistics we might want to report, the baseline is a stable anchor point, but not for most reported statistics.  It is meaningful to report that overall heart attack rates are falling by about 5% per year.  The baseline is stable and meaningful in itself (the average across the whole population), and so the percentage change is useful information in itself.  This is even more true because we are talking about a trend so that any little anomalies get averaged out.  By contrast, telling you that some exposure increases your own risk of heart attack by about 5% per year is close to utterly uninformative, and indeed probably qualifies as disinformative.

As I mentioned, ratio measures (in forms like 1.2 or 3.5) are convenient for researchers to use.  You probably also noticed me playing with percentage reporting, using numbers you seldom see like 10,000%.  This brings us to the reporting of risk ratios in the form of percentages as a method of lying — or if it is not lying (an intentional attempt to make people believe something one knows is not true), it is a grossly negligent disregard for accurate communication.

Reporting a risk ratio of 1.7 for some disease may not mean much to most people, but at least it is not misleading them.  There is a good way to explain it in simple terms, something like, “there is an increase in risk, though less than double”.  If the baseline is low (if the outcome is relatively uncommon) then most people will recognize this to be a bad thing, but not too terribly bad.  So the liars will not report it that way, but rather report it as “a 70% increase”.  This is technically accurate, but we know that it is very likely to confuse most people, and thus qualifies as lying with the literal truth.  Most people see the “70%” and think (consciously or subconsciously), “I know that 70% is most of 100%, and 100% is a sure thing, so this is a very big risk.”

(As a slightly more complicated observation:  When these liars want to scare people about a risk, they prefer that a risk ratio come in at 1.7 rather than a much larger 2.4.  This is because “70% increase” triggers this misperception, but “140% increase”, while still sounding big and scary, sends a clear reminder that the “almost a sure thing” misinterpretation cannot be correct.)

The problem here is that people — even fairly numerate people when working outside areas they think about a lot — tend to confuse a percent change and a percentage point change.  When the units being talked about are percentages (which is to say, probabilities, as opposed to the quantities of money like the above examples) that are changing by some percentage of that original percentage, this is an easy source of confusion that liars can take advantage of.  An increase in probability by 70 percentage points (e.g., from a 2% chance to a 72% chance) is huge.  An increase of 70 percent (e.g., from 2% to 3.4%) is not, so long as the baseline probability is low, which it is for almost all diseases for almost everyone.
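The distinction can be spelled out with the numbers from the paragraph above (the 2% baseline is illustrative):

```python
baseline = 0.02   # an assumed 2% baseline probability of the disease

# "An increase of 70 percent": multiply the baseline by 1.70.
seventy_percent = baseline * 1.70   # 3.4% -- still a small probability

# "An increase of 70 percentage points": add 0.70 to the probability itself.
seventy_points = baseline + 0.70    # 72% -- a huge probability
```

The two phrasings differ by a factor of more than twenty in this example, which is exactly the confusion the liars exploit.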

There seems to be more research on this regarding breast cancer than other topics (breast cancer is characterized by an even larger industry than anti-tobacco that depends on misleading people about the risks, and there is also more interest in the statistics among the public).  It is pretty clear that when you tell someone an exposure increases her risk of breast cancer by 30%, she is quite likely to freak out about it, believing that this means there will be a 1-in-3 chance she will get the disease as a result of the exposure.

Reporting the risk ratio of 1.3 will at least avoid this problem.  But there are easy ways to make the statistic meaningful to someone — assuming someone genuinely wants to communicate honest information and not to lie with statistics to further a political goal or self-enrichment.  The most obvious is to translate the relative risk into the absolute risk (the actual risk probability, without reference to a baseline), or similarly to report the risk difference (the change in the absolute risk), rather than the ratio/percentage.  This is something that anyone with a bit of expertise on a topic can do (though it is a bit tricky — it is not quite as simple as a non-expert might think).

Reporting absolute changes is what I did in the example of 2% changing to 3.4% (or, for the case of 1.3, changing to 2.6%).  The risk difference when going from 2.0% to 3.4% would be 1.4 percentage points, or put another way, you would have a 1.4% chance of getting the outcome as a result of the exposure.  Most people are still not great at intuiting what probabilities mean, but they are not terrible.  At least they have a fighting chance.  (Their chances are much better when the probabilities are in the 1% range or higher, rather than the 0.1% range — once we get below about 1%, intuition starts to fail badly.)

To finish with an on-topic example of the risk difference, what does it mean to say that smoke-free alternatives cause 1% of the risk of serious cardiovascular events (e.g., heart attack, stroke) of smoking?  [Note that this comparison is yet another meaning of “percent” than those talked about above — even more room for confusion!  Also, this is in the plausible range of estimates, but I am not claiming it is necessarily the best estimate.]  It means that if we consider a man of late middle age whose nicotine-free baseline risk is 5% over the next decade, then his risk as a smoker is 10%.  Meanwhile, his risk as a THR product user would be 5.05%.  Moreover, this should still be reported as simply 5% (no measurable change), since the uncertainty around the original 5% is far greater than that 0.05% difference.
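The closing example works out as follows — a sketch using the post’s illustrative numbers, which are not claimed to be best estimates:

```python
baseline = 0.05         # 5% ten-year risk for the nicotine-free man (illustrative)
smoking_excess = 0.05   # smoking doubles his risk: 5% baseline + 5% excess = 10%
thr_share = 0.01        # THR products assumed to carry 1% of smoking's excess risk

smoker_risk = baseline + smoking_excess                 # 10%
thr_user_risk = baseline + thr_share * smoking_excess   # 5.05%
```

The 0.05 percentage point difference for the THR user is swamped by the uncertainty in the 5% baseline itself, which is why it rounds away to “no measurable change”.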