Tag Archives: RCTs

Sunday Science Lesson: Why people mistakenly think RCTs (etc.) are always better

by Carl V Phillips

I recently completed a report in another subject area which explains and rebuts the naive belief by non-scientists (including some who have the title of scientists but are clearly not really scientists) that some particular epidemiologic study types are always better, no matter what question you are trying to answer. I thought it might be worthwhile to post some of that here, since it has a lot of relevance to studies of THR.

Readers of this page will recall that I recently posted talking-points about why clinical trials (RCTs) are a stupid way to try to study THR. A more detailed version is here and the summary of the summary is: RCTs, like all study designs, have advantages and disadvantages. It turns out that when studying medical treatments, the advantages are huge and the disadvantages almost disappear, whereas when trying to study real-world behavioral choices of free-living people, the disadvantages are pretty much fatal and what are sometimes advantages actually become disadvantages. Similarly, some other epidemiologic study designs (e.g., case-control studies) are generally best for studying cancer and other chronic diseases, which are caused by the interplay of myriad factors that occurred long before the event, but are not particularly advantageous for studying things like smoking cessation. Asking someone why he thinks he got cancer is utterly worthless, but asking someone why he quit smoking can provide pretty good data. Continue reading

Simple talking points on RCTs not being a very useful way to study tobacco harm reduction

by Carl V Phillips

I have composed this at the request of Gregory Conley, who recently had the nightmarish experience of trying to explain science to a bunch of health reporters. It is just a summary, as streamlined as I am capable of, of material that I have previously explained in detail. To better understand the points, see this post in particular, as well as anything at this tag. For a bit more still, search “RCT” (the search window is at the right or at the top, depending on how you are viewing this). Continue reading

This works in practice, now we just need to see if it works in theory

by Carl V Phillips

The title refers to a classic joke about economists, describing a common practice in the field: Something is observed in the real world — say, the collapse of the Greek economy, insurance prices dropping under the ACA, or people lining up to buy new iPhones in spite of already owning perfectly good old iPhones — and the theoretical economists scramble to figure out if their models can show that it can really happen. In fairness, that way of thinking is not as absurd as it sounds. Developing a theory to explain an observation is good science, so long as it is being done to try to improve our models and thus better understand reality and perhaps make better predictions. Obviously, the ability or inability to work out the model does not change what has happened in reality. Continue reading

Why clinical trials are a bad study method for tobacco harm reduction

Following my previous post and my comments regarding a current ill-advised project proposal, I have been asked to further explain why randomized trials are not a useful method for studying THR. I did a far from complete job explaining that point in the previous post because the point was limited to a few paragraphs at the end of a discussion of the public health science mindset. So let me try to remedy that today. Continue reading

How the medicalized history of public health damaged its science too, particularly including understanding ecigs (fairly wonkish)

by Carl V Phillips

This week, in my major essay (and breezy follow-up), I argued that the dominance of hate-filled nanny-staters in public health now is actually a product of medic and technocrat influence more than of the wingnuttery itself. The worst problem there has to do with inappropriate goals that stem from a medical worldview morphing into a pseudo-ethic. The seemingly inevitable chain of events created by that pseudo-ethic resulted in public health professionals hating the human beings whom we think of as the public, because we are a threat to what they think of as the public, which is just the collection of bodies we occupy.

But this is not the only damaging legacy in public health of the thoughtless application of medical thinking. The science itself has also suffered, most notably (though far from only) because of the fetishization of clinical experiments (aka RCTs: randomized controlled trials) and denial of research methods that are more appropriate for public health. This is something I have written and taught about extensively. I will attempt to summarize it in a couple of thousand words. Continue reading

Sunday Science Lesson: mistaking necessity for virtue in study design

by Carl V Phillips

Yes, I have written versions of this before, but I never tire of the topic, mostly because of how much damage the errors do to science and health policy.  I get reminded of it every time I travel through a European or European-influenced airport.

Most scientific knowledge (which is just a fancy way of saying “knowledge” — I am just coopting the phrase from those who try to imply that the adjective is meaningful) comes from easy observations — e.g., “there are a lot more women than men in this population” requires only looking around.  Sometimes a bit of knowledge of interest gets a bit more complicated and we need to actively use measurement instruments — e.g., “this is heavy” is easy, but “this has a mass of 44.21 kg” requires careful methods and a good scale.  Finally, something that we want to know might be completely beyond our ability to assess without complicated methods — e.g., “does a lifetime of exposure to E double the risk of disease D” requires a complicated statistical analysis of thousands of people.  The point here is that just because those methods are necessary for the latter does not mean they are necessary — or even useful! — for easier observations.
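
To put a rough number on that last example, here is a minimal sketch, in Python, of the standard two-proportion sample-size calculation.  The baseline risk, doubled risk, and error targets are illustrative assumptions, not figures from any particular study:

```python
# Rough sample-size sketch for detecting a doubling of a rare risk
# (illustrative numbers only -- not from any particular study).
from scipy.stats import norm

p_unexposed = 0.001          # assumed baseline lifetime risk of disease D
p_exposed = 0.002            # doubled risk among those exposed to E
alpha, power = 0.05, 0.80    # conventional error targets

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_beta = norm.ppf(power)            # ~0.84

# Standard normal-approximation formula for two independent proportions
variance = p_unexposed * (1 - p_unexposed) + p_exposed * (1 - p_exposed)
n_per_group = (z_alpha + z_beta) ** 2 * variance / (p_exposed - p_unexposed) ** 2

print(f"~{n_per_group:,.0f} people per group")  # roughly 23,500 per group
```

Tens of thousands of subjects just to detect a doubling of a rare risk, while “there are a lot more women than men in this population” needs nothing but eyes.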

To elaborate on the concept, I will start with my favorite analogy to it:  Airports/stations need to communicate to thousands of people when their plane/train/bus leaves and where to board it, and until we all have reliable connectivity in our pockets (good realtime phone apps that personalize our information could make this all moot), this will continue to be provided using overhead displays.  These were originally written and updated by hand, then replaced by some amazing and clever mechanical devices, and are now video monitors.  But fundamentally nothing has changed, and that is the problem, because airports are not train stations, or more particularly, flights are not train trips.

Consider what you naturally know and can easily remember when you arrive at an airport or train station.  Most obviously, you know your identity, which is sufficient to find your vessel (using the phone app, though it has always been sufficient to visit the check-in desk), but we still need displays that are quick to access, instantly updated, and always available.  To make the displays usable you need to know something other than your identity.  You surely know where you are going and approximately when you are leaving, and this is all you need to identify your flight.  There are seldom multiple departures from one airport to a particular other airport at close to the same time (particularly since you also easily remember which airline you are flying).  This does not work so well for trains, however, because almost half the trains leaving a station may make a particular stop: every train going a particular direction passes that station and stops there.

This also means there is a difference in what can be communicated via the monitors, because planes land in just one place, whereas the same train stops several or dozens of places and not all can be listed.  Thus, train stations are forced to have their passengers drill down further and make an effort to remember something that is not quite so intuitive: the exact minute of the scheduled departure time, which is how they identify the vessels (usually along with one target destination, either the end of the line or the most major station on the way, which is intuitive to remember).

You probably see where I am going with this:  American airports and those following their style display departing flights based on where they are going, alphabetically by city.  This is a great system, since everyone knows where they are going and is so skilled at searching by alphabetical order that they can quickly glance to the right range of the list to find the city name.  European-style airports have been designed by people who seem to think they are train stations, and list flights by the minute of departure.  This is a bad system because it requires passengers to make the extra effort to remember or check the exact minute of departure, and to step through a list of ordered numbers with varying gaps, which is much harder than alphabetical order because you cannot use instant intuition like “I am going to Philadelphia, so I will direct my glance to 3/4 of the way through the list”.

Like a complicated cohort study or clinical trial, the train-style listing is a costly necessity under particular conditions.  But such necessity is not a virtue of the method.  “It is needed at train stations, so it is the best we can do there” clearly does not imply “it is always best.”  Similarly, “we cannot figure out whether this exposure causes a .001 chance of that disease without a huge systematic study and a lot of statistical analysis” does not mean “we cannot figure out that e-cigarettes help people quit smoking without such a study”.  Even more absurd is the “reasoning” that leads to: “we cannot figure out which medical treatment works better without a clinical trial” and therefore “we cannot figure out if people like e-cigarettes without a clinical trial”.

Needless to say, the latter statement in each sentence is obviously false, and the proposed equivalences are moronic.  Just because the extra complication and effort are needed to answer a hard quantitative question does not mean that they are needed for an obvious qualitative conclusion.  Anyone who actually understands science at the grade-school level realizes that different research is needed to answer different questions.  It makes a bit more sense to use a clinical trial to try to understand adoption of THR than it does to use a particle accelerator to do it, but not a lot more.

Yet, of course, it is just such innumeracy that appears in the public discourse.  Just as habit leads many people to ignore common sense and insist that train-style displays at airports make sense, “public health” indoctrination also eliminates the common-sense level science that is taught in grade school.  It is reassuring to note that the claims about a particular type of study always being best, or even merely always being needed, are not made by actual scientists.  They always come from political activists or medics, and occasionally from incompetent epidemiologists (not actually a redundant phrase — just close to it).

I think this analysis also extends into dealing with thought-free analogies in regulation, such as “we do X with cigarette regulation and therefore should do it with products that are different in almost every way other than being tobacco” or “we require X for medicines that serve only to eliminate a disease, and therefore should require it for products that people use for enjoyment.”  I will leave that extension as an exercise.

NewZ ecig clinical study, an “I told you so”

by Carl V Phillips

Yesterday I explained why the new clinical trial out of New Zealand should not be touted as important news for e-cigarettes or THR in general.  In addition to the general message that clinical cessation trials are not the right way to study THR products and are just as likely to produce bad results as “good” ones, I pointed out a few particular issues.  First, it was damning with faint praise, claiming that e-cigarettes perform just barely better than nicotine patches, which grossly misrepresents everything we know about their effectiveness.  Additionally, with a plausibly different run of luck (random sampling error), that study would have “shown” that e-cigarettes are less effective than patches.  Of course, such a result would have been no more informative about e-cigarettes than the “good” result was, but that is the point.

Sure enough, no sooner had I finished writing my analysis than anti-THR liar Stanton Glantz pretty much made my point for me.  In a post on his pseudo-blog (not really a blog, because he censors any critical discussion), Glantz claimed that the study

found no difference in 6 month quit rates among the three groups.

And in a hilarious bit of “do as I say, not as I do”, he opined,

Hopefully this study will get ecig promoters to stop claiming that ecigs are better than NRT for quitting.

Of course, the study showed that e-cigarettes did a bit better.  Glantz probably thinks this bald lie is justified by a common misinterpretation of statistics, wherein different numbers that are not statistically significantly different are incorrectly called “the same”.  Anyone with a 21st century understanding of epidemiology knows that this is not the right thing to say, but since Glantz’s paltry understanding of the science seems to be based on two classes he took three decades ago, perhaps this is simple innumeracy and not a lie.
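
For readers who want to see the arithmetic behind that point, here is a minimal sketch using hypothetical quit counts, chosen for illustration and not taken from the trial itself:

```python
# Why "not statistically significantly different" is not "the same":
# a minimal sketch with hypothetical quit counts (not the trial's actual data).
from math import sqrt

quit_ecig, n_ecig = 43, 590     # hypothetical: ~7.3% quit with e-cigarettes
quit_patch, n_patch = 17, 295   # hypothetical: ~5.8% quit with patches

p1, p2 = quit_ecig / n_ecig, quit_patch / n_patch
diff = p1 - p2                                  # point estimate favors e-cigarettes
se = sqrt(p1 * (1 - p1) / n_ecig + p2 * (1 - p2) / n_patch)
lo, hi = diff - 1.96 * se, diff + 1.96 * se     # 95% confidence interval

print(f"difference = {diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
# difference = 0.015, 95% CI (-0.019, 0.049): the interval spans zero, so the
# difference is "not significant" -- but the estimate is still that e-cigarettes
# did better, not that the groups were "the same".
```

A confidence interval that spans zero means “we cannot rule out that the difference is due to chance”; it does not mean “the quit rates were the same”.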

Still, he has a point about the numbers not being very dramatic.  The real lie (and a case of innumeracy much worse than using incorrect terminology) is suggesting that this one little flawed artificial study somehow trumps the vast knowledge we have from better sources.  It is quite funny that he, who has made a career out of ignoring evidence, suggests that everyone else should pay attention to this “evidence” and change their behavior.  Not so funny is my role as Cassandra:  If we start touting misleading studies like this one as being great news when they happen to go our way, it is pretty much guaranteed to hurt us rather than help us.

(Glantz goes on to post some utter drivel about the nature of RCTs and what previous evidence shows about e-cigarettes, which I have debunked before and will not bother with here.  After a few decades, you learn to not try to fix every little flaw in a particularly slow student’s writings.)

Of course, Glantz does not have the skills to figure out that this study is flawed.  But he might have had some hope had he actually read it.  Or the press release.  Or even one of the news stories.  Instead, it appears that he just heard some garbled sentence or two about it and wrote his post based on that.  How can we know that?  Because when his post first appeared (screenshot below), it described the comparison as between nicotine gum and e-cigarettes, even though someone who actually spent three minutes studying the material would not have made that mistake.

1st try

Oops. That’s what happens when you don’t do the reading.

Notice that in both the headline and the first sentence he describes the study as using nicotine gum.  Oh, but wait, it gets better.  A few hours later, he changed the first sentence (see screenshot below).  Of course, being who he is, he did not include any sort of statement of correction as an honest researcher or reporter would.  (Quietly fixing a grammar typo or garbled sentence is no big deal — I do that — but when you have told your readers something wrong and then try to memory-hole it, rather than noting that you are making a correction, that is yet another layer of lying.)

2nd try

And this is what happens when you don’t know how to operate your software.

Notice now the first sentence is changed but the headline is still the same.  Did he just not realize he needed to fix that too, or did he have no idea how to change a title on his blog and was desperately calling tech support to try to get them to help hide his error?  Apparently tech support came through, though, because the version you will see if you follow the above link has memory-holed the evidence suggesting he did not even read the study (though you will notice that the link I gave still has “gum” in the URL, but now redirects to the new page where the URL has “patch” in it).

So that is all quite hilarious.  But don’t let it distract you from the main message.  We need to focus on the real sources of knowledge about THR and not buy into a research paradigm that is — often literally — designed to hide THR’s clear successes and benefits.  When e-cigarette advocates embrace studies with bad methods and misleading results (even if they seem to be “good” results), rather than objecting to the bad approach, it hurts the cause.  In this case, even the “good” study can be spun against the truth about THR.

Ecigs = patches?? More largely-uninformative research, but at least this time it works out

by Carl V Phillips

As you have probably already heard, a new clinical study reported in the Lancet found that in the artificial context of a smoking-cessation clinic — using the very odd population of people who go to such clinics — when people were offered an inadequate [Update: possibly inadequate — see * footnote] quantity of low-quality e-cigarettes, they became abstinent from smoking (perhaps temporarily) at about the same rate as those offered nicotine patches.  Woo hoo.

[*I have now seen it reported by a study author that participants had access to as large a quantity as they wanted (gee, you think they might have bothered to mention that in the study methods) despite the observation that participants consumed less than is usually adequate as a substitute for someone just quitting smoking.  Perhaps they were just not properly advised about how much to try to use, or they were scared to use much, given that e-cigarettes are banned where the study took place.  Whatever the reason for it, the inadequate quantity of consumption probably reduced the rate of switching.]

My title for this post is way too long, but I still had many other phrases or thoughts I wanted to fit into it, including:

damning with faint praise

social phenomena cannot be effectively studied in a clinic

designed to fail (presumably unintentionally), though not quite enough to manage to fail

and, perhaps most important,

ivory-tower researchers seem to think that a well-established fact, something that everyone who is paying attention to all of the evidence already knows, is not true until they can show it in one of their artificial experiments (or, as the joke goes, “ok, that works in the real world, but let’s see if it actually works in theory”)

I realize that some e-cigarette advocates have embraced it as good and exciting news, but I would suggest not getting too excited, for a few reasons.  First, this study does not actually address the real-world phenomenon of THR.  Real THR does not consist of shoving one particular option into the hands of people and saying “do this rather than smoking”.  Even when that particular option might be ideal for some people (which seems not to be true in this case), it is not ideal for everyone.  Imagine a hypothetical world in which one kind of e-cigarette was more satisfying than smoking for 10% of smokers, another kind for a different 10%, and snus for another 10% (and that no one else liked those products at all).  In that scenario, we can help 30% of all smokers quit and also be happier for having done so.  But any trial that tried to force one of those on people would show that it fails 90% of the time.
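
A minimal simulation makes the arithmetic of that thought experiment concrete.  The 10%/10%/10% figures below are just the hypothetical ones from the paragraph above, not estimates of anything:

```python
# Sketch of the hypothetical-preferences point above (the percentages are the
# hypothetical ones from the text, not measurements of anything).
import random

random.seed(0)
products = ["ecig_A", "ecig_B", "snus"]
# Each smoker is satisfied by at most one product: 10% per product, 70% by none.
smokers = [random.choice(products + ["none"] * 7) for _ in range(100_000)]

# A trial that forces everyone onto ecig_A "shows" a ~90% failure rate...
trial_success = sum(s == "ecig_A" for s in smokers) / len(smokers)

# ...while real-world THR, where each smoker finds the product that suits them,
# helps everyone who has any satisfying option (~30%).
real_world_success = sum(s != "none" for s in smokers) / len(smokers)

print(f"forced single-product trial: {trial_success:.1%} quit")       # ~10%
print(f"free choice among products:  {real_world_success:.1%} quit")  # ~30%
```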

Second, the results were within the margin of statistical error from the news media being blanketed with a report that said “new study shows e-cigarettes do not work as well as those wonderful ‘approved’ nicotine patches, so there is no reason to allow them on the market.”  In case the enormous importance of that little bit of random good luck is not clear, let me explain.  In the study, e-cigarettes did a bit better than patches, but the difference was less than “statistically significant”, which basically means “the fact that e-cigarettes did a bit better rather than a bit worse is quite conceivably just due to luck of the draw; a repetition of the exact same study might well reverse the order.”

So, a little bit of different random noise in their results, and e-cigarettes would have performed a bit below patches rather than a bit above them in terms of causing smoking abstinence.  Had that occurred, this study would be making headlines as yet another reason to ban e-cigarettes.  And instead of the press embargo being released after business hours on Friday (you are probably not aware, but the journal employed that classic tactic to minimize press coverage of an announcement) it would have been released on a Tuesday morning so it could headline all the health news sections that week.
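
To see how easily that could have happened, here is a sketch that re-runs a trial of roughly this shape many times.  The underlying quit rates and arm sizes are assumptions for illustration, not the study's actual parameters:

```python
# How easily "a bit better" becomes "a bit worse": resimulate a trial many
# times under hypothetical true quit rates (not the trial's actual parameters).
import random

random.seed(1)
true_ecig, true_patch = 0.070, 0.060   # hypothetical underlying quit rates
n_ecig, n_patch = 590, 295             # hypothetical arm sizes
reps = 10_000

flipped = 0
for _ in range(reps):
    quits_e = sum(random.random() < true_ecig for _ in range(n_ecig))
    quits_p = sum(random.random() < true_patch for _ in range(n_patch))
    if quits_e / n_ecig < quits_p / n_patch:   # patches "win" this draw
        flipped += 1

print(f"patches come out ahead in {flipped / reps:.0%} of repetitions")
# Even with e-cigarettes truly (slightly) better, roughly a quarter to a third
# of repetitions of a study this size would report patches doing better.
```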

As I noted in the title, this time it worked out.  But next time it might not.

And “worked out” is pretty faint praise in this case.  Nicotine patches are a fairly worthless product.  They work well for some never-smokers I know who use them for performance enhancement; they are pretty good for that because they deliver a constant dosage of nicotine, making them kind of like sipping coffee all day, which is perfect for some people.  But history tells us they are pretty useless for quitting smoking (and you do not need any clinical trials to tell you that:  in the USA, they have existed, they have been heavily touted, and yet smoking rates have basically just tracked the uptake of low-risk alternatives — enough said).  Indeed, the claim that e-cigarettes merely worked just as well as patches (or even a little better) is flatly contrary to everything we know about e-cigarettes and how well they work.  This story does not provide further evidence that e-cigarettes work; it implies that they do not work as well as we know they do!

Instead of interpreting the study as important news, it is more useful to view the interpretations of the results as misguided science.  That avoids the problem of buying into a bad scientific paradigm that is ultimately bad for THR.  This is exactly the junk-science interpretation of what constitutes evidence that has been used to deny the overwhelming evidence about THR for the last decade.  Yes, it is nice to be able to respond to those who play this game by saying “ha, we have a study too” — but it is just one study and a pretty weak result.  Better to focus on fighting the ANTZ’s repeated denial that the other evidence is what is useful, a fight that becomes harder if we implicitly endorse that denial whenever it is convenient.

Clinical trials are simply not a useful way to evaluate whether a consumer product is attractive to consumers, for the reasons cited above: the artificial setting, the unrepresentative people, and the inevitably limited range of options.  Disappearance data alone (the quantity of the product sold to consumers) tells us far more than any clinical trial ever could.  Using clinical trials where they are not useful — let alone claiming that they are more useful than the better evidence — is mistaking the tools for the goals.  The fetishizing of the tool of clinical trials (useful for some things but not everything) reminds me of a two-year-old with a toy wrench playing “fix it”:  “wrenches are used to fix things, I am applying my wrench to an object, therefore I am fixing it.”

People who might be happy with a nicotine patch are not the target for e-cigarettes.  Even less so are those who go to a clinic looking for some magic bullet that will make them not want to smoke (see my series on second-order preferences to understand what they really want and why they are never going to get it).  E-cigarettes work best for the large portion of smokers who have become comfortable with (or resigned to) the fact that they want to keep smoking — or, of course, to do something that is a fully-satisfying substitute.

And all this is to say nothing of the fact that the study report makes clear that the smokers were not given nearly enough e-cigarettes to provide an acceptable substitute (thus intentionally and inappropriately imitating the inadequacy of the patch) and the products were of such low quality that they kept failing.  That is, the study was not a very good picture of what would happen in a clinical setting if you were really trying to get people to switch to e-cigarettes.

In fairness to the authors of this study, no data is worthless if interpreted correctly.  Better to have something rather than nothing.  But that is a big “if”.  The honest interpretation of this study should have been,

We know that e-cigarettes are proving to be a popular and effective method for quitting smoking in the real world and that no serious short-term side-effects have been found based on millions of observations.  We do not know specifically how well e-cigarettes would work in a clinical smoking-cessation setting, though the reasonable hypothesis would be “they would work better than the current practice”.  This study confirms, as we already had every reason to believe, that even with really lousy products, e-cigarettes are better liked than nicotine patches.  This suggests (again, as we already pretty much knew) that clinics that really want people to quit smoking should start offering e-cigarettes.

I hate to give the authors a hard time, because they were just doing their jobs as institutionally-constrained researchers (“must use hammer, so call everything a nail”), and were being vaguely pro-THR (though not so much as to risk offending the tobacco control industry, of course).  Most of the blatant lies about this study are concentrated in the press release (and thus in the news reports) which the study authors did not write.  However, if they had veto power over the content, as is likely the case, they share the blame.

The press release tried to portray this result as important and groundbreaking.  Consider the following excerpts from it:

“First trial to compare e-cigarettes with nicotine patches…”

Ok, fine, it is the first of those.  Yawn.

“…only the second controlled trial to be published which evaluates e-cigarettes, and is the first ever trial to assess whether e-cigarettes are more or less effective than an established smoking cessation aid, nicotine patches, in helping smokers to quit.”

I guess there are some bits of literal truth to be found there, but the overall message is very misleading.  The reader is led to believe that this study tells us something new by conveniently ignoring the absolutely enormous quantity of evidence we have from sources other than trials.  They might as well be saying “this is the first research done in New Zealand on this topic”.

It is also false to suggest that the previous clinical trial (presumably referring to the one by Polosa’s group, which found that a lot of smokers who were not seeking to quit spontaneously switched to e-cigarettes), for all of its limitations, did not show that e-cigarettes work better than NRT products.  Polosa’s result clearly demonstrated that e-cigarettes work better, because we already knew how poorly NRT works.  It did not matter that there was no comparison within the study — you do not have to show them both on the same map to conclude that New Zealand is further away from you than your corner pizza place, after all.

“Our study establishes a critical benchmark for e-cigarette performance compared to nicotine patches and placebo e-cigarettes…”

Nope.  There is nothing critical about this result at all.  As a benchmark it might have some value, telling us that even when you seem to be trying to make e-cigarettes fail in that setting, they still do better than NRT.  And, of course, the concept of a “placebo e-cigarette” (their term for the zero-nicotine e-cigarettes that some subjects were assigned) is silly; the benefits of an e-cigarette to someone trying to switch from smoking are not limited to the nicotine, and so there can be placebo nicotine but there is no such thing as a placebo e-cigarette.  (Aside: when Polosa’s study came out, those who fetishize drug trial methodology attacked him for not including a placebo group, but merely nicotine and non-nicotine e-cigarettes.  It will be interesting to see if they say the same now that the study was done by their own people.)

The study is also the first to evaluate whether there are any adverse health effects associated with using e-cigarettes in a large (300+) group of people, and in real life, rather than a laboratory, situation.

Um, yeah, except for the slightly larger population of several million people who have used e-cigarettes in real real life.  (Note, all the commas in that quote are in the original — I just wanted to point that out to my editors who complain that I use too many commas.)

There is one useful bit of information in the study, though it is pretty buried:  The subjects who were assigned to e-cigarettes (either with or without nicotine) were enormously more likely to recommend them to other smokers than those assigned the patch were to recommend it.  No shock there, obviously.  But it turns out that we know relatively little about exactly how the social marketing of e-cigarettes plays out.  Unlike the rest of the results (which are mere weak confirmations of what we already knew), this could be useful new knowledge.

Bottom line:  The ivory-tower types need to do arcane artificial studies like this in order to advance their careers.  Health science journals need to publish and tout them in order to try to claim that they are the source of knowledge and so people should buy what they are selling at an enormous profit.  This does not mean that those of us who are interested in the truth should fall for their marketing.  Much like the cigarette companies, they are trying to sell a product that has some benefits, but in this case is ultimately a poor choice compared to alternative methods of inquiry.

Sadly, all but a small handful of the ivory-tower types refuse to soil their hands by actually getting to know real people, THR product users.  They use them as study subjects, but they never talk to them, let alone read their blogs and Facebook posts.  If they did, they would not overstate the value of studies like this.  The real science about what is happening in the world definitively demonstrates the value and success of THR.  If we put our faith in artificial studies, however, we are just as likely to get results that contradict what we know as ones that support it.