Editors of Tobacco Control attack blogs: protecting science from cranks, or activism from science?

 

by Roberto A Sussman

[Editor’s Note: This post is the third here on the recent Tobacco Control editorial. The first two, by me, are here and here. This guest post was inspired by a comment Dr. Sussman left on one of the previous posts. His outsider perspective, from physics, offers insight that may not be apparent to those of us mired in social science and health debates, and he provides a deeper dive into the stated policies of the “journal” than anyone else has done. –CVP]

In a recent statement of editorial policy, the editors of the journal Tobacco Control declared that the journal's "Rapid Response" section will henceforth be the only legitimate space in which to express scientific critique of articles published by the journal. In particular, the editors singled out (unnamed) internet bloggers as illegitimate critics.

This editorial policy reads as an unnecessarily harsh and defensive reaction: scientific debate in any field has never been narrowly confined to peer-reviewed journals, all the more so in the current age of broad internet usage and social media. Moderated internet repositories (such as the arXiv, originally hosted at Los Alamos National Laboratory, LANL) have become a regular and very handy communication channel in the physical and mathematical sciences and are taken fully as seriously as journals; researchers can upload material not yet published in a journal (under review), or not intended for journal publication at all, to invite open discussion of fresh (even controversial or unorthodox) ideas without the constraints of the formal review process. Blogs and Facebook pages exist in all disciplines and serve as useful complementary spaces where research issues can be discussed either informally or with varying degrees of rigor, mostly involving scientists and graduate students, but also educated non-scientists who may be interested. Besides all this, publication in a peer-reviewed journal is no guarantee of solid or good-quality research, as many peer-reviewed articles in "official" journals report false, methodologically inconsistent, or dishonest results.

However, some forms of "unofficial" critique are neither valuable nor useful. Scientists make an effort to avoid and exclude cranks and crackpots voicing (mostly in social media) all sorts of critical opinions on various scientific topics (especially politically controversial ones). Typically, these characters cleverly juggle technical terminology out of context to produce theoretical constructions that may fool lay persons, but are easily seen as incoherent nonsense by any professional researcher (or even a competent undergraduate student). As a common feature, they deflect criticism by invoking conspiracies directed by some "scientific establishment" bent on silencing them. As a professional scientist (specializing in theoretical astrophysics and cosmology), I can recall very frustrating encounters with this type of non-scientific critic. I have also engaged creationists and "UFO-logists" in front of non-scientific audiences, and have learned the hard way that debating scientific issues requires proper rules of engagement and proper spaces (which does not exclude blogs). Without the appropriate environment and moderation, scientific arguments (even when expressed in a non-technical manner) cannot compete with punchlines, quick soundbites and analogies.

Medical sciences are not immune to science trolling, as witnessed by the efforts of groups like the ACSH (American Council on Science and Health) to expose all sorts of dubious health claims promoted by fad peddlers and cranks writing in social media. Science trolling about medical issues has a more direct and significant social impact than its counterpart in physics. Statements promoting well-being, or warning against terrible ills that would follow automatically from some diet, from consuming some substance, or from adopting a new habit, have an immediate practical impact on those accepting them as true or plausible: the fallacies translate directly into behavior. By contrast, a cranky claim from physics trolling, such as "a black hole emerging from the Large Hadron Collider (LHC) may cause a great planetary catastrophe", sounds distant and abstract even to those who understand or believe it. After all, whether one believes it or not, there is no practical course of action to prevent the whole earth from being swallowed by a massive black hole, but for those believing that diet X cures cancer, adopting and promoting this diet is concrete and doable. The fallaciousness of such a claim (i.e. that diet X does not in fact cure cancer) can only be verified by looking at data-based statistics after decades of observation, and it is very unlikely that the lay public will follow up on long-term epidemiological studies. As a consequence, large sections of the public may keep believing fallacious health claims (especially if propagated by wide media coverage), and those propagating them are very likely to get away with it (especially if well connected politically). Cranky predictions from physics trolling, on the other hand, tend to be rapidly disproven and forgotten: no planetary catastrophe happened when the LHC started functioning.

While the disinformation propagated by science trolling and the peskiness of some social media crackpots are very disturbing, these phenomena cannot serve as reasons to decree a strict enclosure of all scientific discourse and debate within the walls of academic journals. Even granting that the editors of Tobacco Control (and other scientists) could be legitimately annoyed by cranky "outsider" critics writing in social media and blogs, their editorial is an evident over-reaction. Normally, scientific journal editors would not bother to issue a forceful editorial policy declaring war on this type of science trolling. It is simply and unceremoniously filtered out of the scientific debate, without confining discussion and critique to official channels.

The key to understanding what lies behind this overreaction is to ask: are the bloggers who annoy the Tobacco Control editors part of the legion of social media cranks who pester scientists in various disciplines? To answer this question we need to examine the material posted by these bloggers. If this material is worthless, inconsistent nonsense disguised as technical criticism, then the Tobacco Control editors may have a point (even if they exaggerate). But if this material is valuable and methodologically sound criticism, then the defensive reaction from the editors would likely follow from their inability to refute it within the rules of scientific debate. To address these questions we also need to understand the specifics of the Tobacco Control journal and the research it publishes, as well as the motivations and backgrounds of the critical bloggers and the material they post.

To the external eye, Tobacco Control looks like an ordinary scientific journal: it has an editorial board of professors; its contributors are PhDs and other credentialed researchers working (mostly) in academic or government environments, receiving public and industry (pharmaceutical) grants; it undertakes a formal peer-review process; it includes a rapid-response section; etc. This looks like any journal in other disciplines.

However, this resemblance is a deceptive illusion based on common external markings and trappings. Tobacco Control is not a proper scientific research journal that serves a real academic community. It is a journal for a loose alliance of academics and regulators (mostly, physicians, lawyers and other non-scientists) whose main task is to advocate and promote a specific tobacco regulation policy with the aim of eradicating tobacco and nicotine usage.

Advancing the policy strategy is paramount for the journal and is not open to debate; the "science" and related technical aspects of the research it publishes are strictly confined to tactical issues, subservient to their potential utility for this advocacy. This characterization requires no secret knowledge. A glance at the journal's recommendations to prospective authors states its research orientation and strict priorities clearly and openly:

The principal concern of Tobacco Control is to provide a forum for research, analysis, commentary, and debate on policies, programmes, and strategies that are likely to further the objectives of a comprehensive tobacco control policy. In papers submitted for review the introduction should indicate why the research reported or issues discussed are important in terms of controlling tobacco use, and the discussion section should include an analysis of how the research reported contributes to tobacco control objectives.

In fact, prospective authors are explicitly discouraged from submitting articles that may contain potentially valuable scientific material but have no direct bearing on the advancement of the core policy strategy. From their list of papers they are not interested in:

Papers that show the authors have never opened Tobacco Control and do not understand its primary focus on tobacco control rather than on tobacco and its use and health consequences. We are interested in such papers, but only if their authors address the implications of their findings for tobacco control.

While it may be argued that most research is (or could be) connected to some type of social activism with public policy implications, or to other social or political "extra-science" concerns, no journal I know of in other disciplines (not even in the politically contentious field of climate science) functions with such a strict focus on, and dependency upon, advocacy of a specific political agenda. This renders Tobacco Control primarily an activist broadside that is a travesty of a science journal.

To illustrate how the science in Tobacco Control is just a thin skin over a particular advocacy position, we need to examine what lies beneath that skin. I elaborate on this below.

Practically all the articles published in Tobacco Control present research that fully complies and agrees with the premises justifying the regulatory agenda that defines the journal. This lack of disagreement on core technical issues signals a sort of inbuilt monolithic alignment that one expects to find among echo chambers of political activists or dogmatic sects, but which is quite suspicious and uncommon in science, where dissent on core issues occurs and is voiced in every field (of course, I do not mean crackpot dissent, but dissent within the rules and bounds of scientific activity).

Perhaps editors or contributors of Tobacco Control might argue that this unanimity is justified because the "hard science" behind their strategic policy "has been settled", and thus disputing the policy would imply a "flat earth attitude" of questioning well-established, rock-solid scientific research. However, this is a clear fallacy: there is no factual basis for the assumption that health science has fully resolved all tobacco-related issues and has thus become cast in stone. There is strong evidence (epidemiological and physiological) of high health risks and hazards from primary cigarette smoking, but many open problems remain to be researched, and evidence is weak or even contradictory (i.e. science is far from "settled") on other related issues, such as health risks from environmental tobacco smoke (ETS) or from other tobacco and nicotine delivery products (smokeless tobacco or electronic cigarettes). These issues, especially harm from ETS exposure, remain controversial and thus must be open to debate. A rigid set of policy recommendations on these issues has a questionable scientific basis. The unanimity on core issues proclaimed by the Tobacco Control journal bears much more resemblance to "toeing the party line" of a political or ideological agenda than to endorsing science.

Another issue that reveals how skin-deep the scientific part of Tobacco Control is, is the technical sloppiness (and in some cases outright fatal methodological flaws) of many articles published in the journal. Some might think that one needs to be a trained health professional to properly appreciate and evaluate the technical aspects of medical research on tobacco that could justify a regulatory policy. This is not so. While expert analysis of clinical issues, diagnosis and treatment might require medical or health science training, most articles published in Tobacco Control rely on results of epidemiological research that can be well understood (at a core level) by any professional with decent training in statistics and some knowledge of social science methodology. Likewise, professionals with a decent knowledge of the physics and chemistry of gases and aerosols can evaluate issues related to putative harms from ETS and e-cigarette vapor.

There are many examples of methodologically deficient articles published by Tobacco Control. In particular, I cite two recently published studies that contain fatal flaws: (i) a 2016 study claiming to have detected an 11% decrease in heart attacks in Sao Paulo, Brazil, immediately after the enactment of a city-wide smoking ban in bars and restaurants, and (ii) a study claiming that use of e-cigarettes is a "gateway" to smoking among high school students in the USA. In both cases the data were handled very sloppily and the results blatantly contradict available evidence. Nevertheless, they got published, which implies either (a) that the editors and peer reviewers were utterly incompetent, or (b) that technical quality and methodological consistency are secondary concerns when the editors deem prospective articles to make a significant contribution to the journal's main concern: the regulatory agenda. In fact, (a) and (b) are not mutually exclusive.

Articles dealing with tobacco/nicotine issues with similar themes and fatal methodological flaws have appeared in other journals. The Sao Paulo study is a sort of sequel to the famous "Helena miracle" study published in the flagship BMJ (same publisher as Tobacco Control), which has been widely criticised and debunked (example), whereas the study on teenage vaping fits the pattern of another study published in Lancet Respiratory Medicine, which was also heavily criticised (example). Both of these studies are co-authored by a known anti-tobacco activist and prolific contributor to research on ETS and tobacco issues in medical journals, Prof Stanton Glantz (the Truth Initiative Distinguished Professor at the Center for Tobacco Control Research and Education at UCSF). These patterns clearly illustrate that treating the advancement of the regulatory policy as a paramount concern, one that even supersedes quality control and methodological consistency, is not confined to the Tobacco Control journal, but extends to the whole cabal formed by the vast majority of public health researchers publishing journal articles on tobacco issues with implications for regulatory policy.

It can be argued that technical flaws, such as sloppiness in handling data and statistical hodgepodge designed to produce outcomes favouring funders' preferred conclusions, are not confined to Tobacco Control and similar journals researching tobacco/nicotine issues, but are common drawbacks in other disciplines as well (especially in various branches of the health sciences). However, the credibility of scientific research is undermined even more when, on top of these drawbacks, journals (such as Tobacco Control) themselves gauge and evaluate research results by their utility for advocating a specific regulatory policy. Since that policy is endorsed and implemented globally at the highest bureaucratic and governmental levels, authors of such flawed studies are basically free from scrutiny and are thus more than willing to publish any research that favours their advocacy, even if it contains extremely misleading and false results.

Articles exhibiting this scandalous level of faulty methodology would never be published in my research area. This does not imply that erroneous or false (or even fraudulent) results are never published in physics journals. But once proven wrong or debunked, the authors and journals acknowledge the faults. Two years ago the BICEP2 observations seemed to have found a weak signal providing indirect evidence of tensor modes associated with gravitational waves that could have been produced during cosmic inflation. If verified, this signal would have been the first empirical evidence for the inflationary hypothesis and a strong indication of the existence of gravitational waves (thus further corroborating General Relativity). However, it turned out that the handling of the BICEP2 data had been sloppy: the data were contaminated by foreground emission from Milky Way dust, which completely swamped the claimed weak signal. In contrast with medical journals refusing to withdraw health claims on tobacco/nicotine-related issues that were later debunked, the BICEP2 claim was promptly withdrawn by all the researchers and journals involved.

Now, what about the bloggers the editors of Tobacco Control wish to excommunicate? Are they science trolls? The blogs that criticize articles appearing in Tobacco Control (and similar journals) are quite diverse, with perhaps their single common feature being opposition to the type of tobacco and e-cigarette regulation that these journals aggressively advocate.

Some of the blogs represent the vaping community and some claim to speak for smokers and vapers. Others are more broadly libertarian. Some argue the case for the tobacco harm reduction (THR) approach, even intensively promoting vaping or smokeless tobacco as substitutes for cigarette smoking, while others adopt a pragmatic approach that supports THR without campaigning against combustible tobacco. Some of these blogs are scholarly defenders of science. Others are not scholarly, but aim to provide a voice for a community of smokers, smokeless tobacco users, and vapers who actually enjoy using the products and feel personally affected by the social stigma produced by the intrusive bans that follow from the policy recommendations.

These bloggers, as well as most readers commenting on their posts, may be critical but are not in denial of the health risks from smoking, particularly cigarette smoking. As far as I can tell, very few of the bloggers and readers advocate a return to the old days when smoking was almost unregulated and allowed everywhere. Instead, bloggers and readers express a generalised desire for a more humane regulation of tobacco smoking (and now of vaping), with the right of nonsmokers to smoke-free environments being respected, but also demanding that smokers (and vapers) be able to enjoy public indoor spaces where they can smoke or vape without being shamed and vilified by "denormalization" policies. Bloggers and readers comment on how such policies are promoted by a global conjunction of increasingly authoritarian public health lobbies and charities, whose aims are perceived to lie far from genuine public health concerns and to be more about preserving bureaucratic power (the "gravy train"), with many of them having intimate financial ties to the pharmaceutical industry.

Some of the blogs are quite scholarly (some are run by experienced scientists) and do provide, together with useful verifiable information, solid, reasoned criticism of the loose methodology prevalent in the research published in Tobacco Control and other health journals. In fact, all the methodological flaws I mentioned before (the faulty meta-analysis and statistical hodgepodge, the "Helena miracle" claims, the mishandling of data, the simplistic "addiction" theory, the dismissal of previous results that do not align with the agenda) have been extensively and rigorously discussed in the pages of these scholarly blogs. While most blogs (even the scholarly ones) tend to avoid the dry, cauterised style full of technical terms found in published journals, favoring a more colloquial but well-articulated style suited to an open and broad audience, a lot of the material appearing in the scholarly blogs could easily meet (after some editing and style changes) the methodological standard of quality that merits publication in a scientific journal.

These scholarly bloggers actually provide a fresh and healthy counterbalance to the "official" tobacco/nicotine research published in academic journals, which is excessively constrained by global public health politics and by the vested interests of the pharmaceutical industry. In particular, they put forward varied proposals for a new regulatory paradigm based on THR to replace the policies trying to enforce the "abstinence only" approach. While the bloggers are certainly not beyond criticism (and some may tend to become too self-centered and too defensive), they are absolutely not (not even remotely) comparable to crackpots or science trolls. In fact, these bloggers provide the necessary and refreshing debate and exchange of ideas that could prevent the science on tobacco/nicotine issues from becoming practically indistinguishable from quasi-religious dogma.

Controversy on core issues and the challenging of dominant paradigms occur naturally in every scientific discipline: there is no reason why this should not occur in public health science. In fact, part of the community of public health scientists has resonated with the criticism expressed by the scholarly bloggers, agreeing (with varying degrees of consistency and conviction) with various of their proposals for shifting regulatory policy towards a THR approach. To claim (as a lot of official tobacco scientists do) that this wide spectrum of voices criticising the dominant politics is merely a front for the maligned tobacco industry is a ridiculous libel that is easily disproved.

It is clear beyond doubt that the harsh defensive reaction of the editors of the Tobacco Control journal stems from their inability to acknowledge serious technical errors spotted by the bloggers they would like to excommunicate. These editors are exploiting the fact that, externally, their niche (a journal whose editors and contributors are credentialed academics) resembles the niche of other scientific journals, while the bloggers (even when posting valuable material) stand outside these "official" channels. The hope of the Tobacco Control editors is to claim, by association, the professional authority of journals in other sciences, and to use that borrowed authority to deflect the bloggers' criticism.

The tactic of the editors of the Tobacco Control journal is then evident: to identify all their critics, but especially those writing in scholarly blogs, with the social media crackpots who besiege scientists in other disciplines. Their editorial is an attempt to exploit their external resemblance to a real research journal, serving a real academic community, for this purpose. Their target audiences are: first, the media, the politicians and the medical community who can implement the policies they advocate; second, public health authorities and other academic communities (which would identify with them because of the superficial resemblance); and third, the lay public, who are completely unaware of the inner workings of scientific activity and simply assume that somebody like Prof Stanton Glantz, a co-author of the fraudulent "Helena miracle" (to use a well-known example), is as good a scientist as any other.

It goes without saying that the dominant majority of public health researchers involved in tobacco/nicotine research are acting with gross dishonesty when they paint themselves as bona fide scientists besieged by social media cranks or a "Big Tobacco" front. Neither Prof Glantz nor any other prominent individual in this cabal has ever disavowed the most extreme pieces of tobacco junk science published in journals: for example, the claim that minutes of outdoor exposure to ETS produce coronary disease, or the existence of "third hand smoke" (health harms somehow resulting from tobacco smoke residue on rugs and walls where someone once smoked). The claims in such pieces of published third-rate junk science are at the same level as the science trolling of cranks in social media. There is little difference between the "third hand smoke" claim, which treats tobacco smoke as a sort of quasi-magical substance lethal even in extremely minute doses, and quasi-witchcraft statements by a freakish naturist sect on social media announcing that wearing a pyramidal magnetic amulet around the neck protects against cancer. Yet the naturist sects do not claim the patronage of science, whereas this type of officially published ultra-junk science does. For this reason, the latter is much more socially harmful than the former.

The identification of Tobacco Control critics with the crackpots besieging scientists may backfire, as it can easily be shown to be false simply by reading through the pages of the scholarly blogs and comparing them with the pages of the journal they criticize. Not even the non-scholarly blogs and their readers can be tagged as trolls, since (in general) they avoid the extreme abuse seen among social media trolls. In fact, anybody who has tried to debate extreme or neurotic anti-smokers (whether laypersons or physicians) rapidly discovers that expressing any doubt or nuance about the usual soundbites, such as "second hand smoking kills" or "you have no right to force your filthy habit on me", or various forms of "protect the children" demagogy, is met with ad hominem attacks, angry denial and abusive language. A large minority of anti-smokers in all walks of life are deeply prejudiced individuals whose attitudes towards smokers are no different from the attitudes of racists and homophobes towards their hate targets. In fact, anonymous anti-smokers in social media exhibit all the unpleasant features of internet trolls and crackpots: dogmatic belief in possessing absolute truths, together with invoking conspiracies (the tobacco industry luring "kids" into nicotine addiction). Unfortunately, some academics who publish on tobacco issues in official journals espouse the same type of cranky, troll-level ideas, just expressed in polite technical terms.

Evidently, the editors of the Tobacco Control journal are trying to mobilise the medical-political bureaucracy and the charities that share their anti-tobacco/anti-nicotine advocacy. The aim of their recent editorial is to pin the crackpot label on all their critics (especially the scholarly blogs), since the old "tobacco industry mole" label is no longer credible. They may succeed, but the label is nevertheless deceptive. Sooner or later most people will realise it and admit that the emperor has no clothes.

30 responses to "Editors of Tobacco Control attack blogs: protecting science from cranks, or activism from science?"

  1. Is it bad if I want Dr. Sussman to also write guest columns on astrophysics?

    • Carl V Phillips

      I am sure he would be glad to point you to something. But, yeah, I don’t think I am going to publish it here. Sorry.

    • Roberto Sussman

      Hi Nate, thanks for your interest. I did a 3-hour series of video lectures on cosmology for educated non-specialists. I am planning to upload it to YouTube. It is in Spanish, so I am contemplating translating them into English to reach a broader audience. You will find them easily by googling in about a month's time.

      • natepickering

        Thanks for the heads up. My Spanish is passable enough that my enjoyment wouldn’t be hindered all that much, but subtitles would probably be good for technical jargon.

        In any event, I’ll be on the lookout.

  2. Daniel hammond

    Unfortunately epidemiology is just perpetual junk science until toxicology is brought in to prove or disprove a chance happening!

    So far anti smoking science and risk statistics are junk science even when they claim they considered all confounding entities.

    Relative risk is nothing

    Absolute risk maybe

    OSHA creating PELs based upon factual toxicology is the basis for the federal courts' reference manual on scientific evidence, and these risk studies are the purest form of advocacy-driven trash that has destroyed epidemiology as a so-called science!

    It's time end point proof is demanded before anyone can make claims for legislative laws!

    That keeps madness from taking over the debate and depriving people of their rights and businesses of their livelihoods!

    • Carl V Phillips

      Um, actually you have that backward. Epidemiology is the science that estimates the actual health effects of an exposure. It is (by definition, really) the only way to know if there is an effect.

      The toxicology you are thinking of basically consists of poisoning rats and mice, figuring out how little of the poison leaves them ok, and then dividing that dosage by 100 and declaring that amount ok. Why 100? Because we have ten fingers (not because of any real evidence). Why rats and mice? Because they are cheap and people let you torture them (not because they react to carcinogens and toxins similarly to the way we do, or even similarly to each other; they don't).

      Also, there is no such thing as proof. Relative risk is just a way of expressing a quantity that could also be expressed as absolute risk. I could go on.
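      (A minimal numeric sketch of that last point, with purely invented counts: the same pair of risks can be reported either as a ratio or as a difference, and a seemingly large ratio can correspond to a tiny absolute difference.)

```python
# Illustrative only: hypothetical counts, not data from any real study.
exposed_cases, exposed_total = 30, 10_000        # hypothetical exposed group
unexposed_cases, unexposed_total = 20, 10_000    # hypothetical unexposed group

risk_exposed = exposed_cases / exposed_total          # 0.003
risk_unexposed = unexposed_cases / unexposed_total    # 0.002

relative_risk = risk_exposed / risk_unexposed         # 1.5  ("50% higher risk")
risk_difference = risk_exposed - risk_unexposed       # 0.001, i.e. 1 extra case per 1000

print(f"RR = {relative_risk:.2f}, absolute risk difference = {risk_difference:.4f}")
```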

    • Roberto Sussman

      There is a widespread misconception that statistics is "not scientific", and that if something is established through some form of statistical inference then it has not really been proved. Science does not work this way; this misconception is a reaction to the fact that statistics can also be misused or abused to justify fallacious claims. However, not all use of statistics is fraudulent. In particular, the existence of fallacious claims in some tobacco/nicotine epidemiology, obtained by dishonest use of statistics, does not imply that all epidemiological research is bogus merely because it relies on statistical inference.

      Statistical inference is important in my research area. The dominant paradigm in cosmology is the "concordance model", which emerges as the best-fit theoretical model through statistical inference (especially likelihood evaluations) applied to complicated data from large-scale observations, much of it gathered by satellites (nature is not as simple as we would like it to be). If you look at a graph of luminosity vs redshift for supernovae, it looks like a cluster of points that does not obviously favor one theoretical model over a competing one. Once you apply relatively simple statistics, you see a class of models being clearly disfavored. Other observations require more complicated statistics. The preference for the concordance model is not based on a single set of observations (supernovae) but on the conjunction and contrast of many observations, and statistics is a major tool in this process. Of course, the concordance model is not free from criticism, but the alternative models must fit the data, and checking this requires statistical inference.
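      To make that concrete, here is a toy sketch (the data below are synthetic and purely illustrative, not supernova measurements) of how a simple chi-square comparison can clearly penalise one candidate model even when the raw scatter of points looks ambiguous to the eye:

```python
# Toy sketch with synthetic data (not real cosmological observations): two candidate
# models are fitted to the same noisy points and compared with a chi-square statistic.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.1, 1.0, 40)            # stand-in for redshift
sigma = 0.5                              # assumed measurement uncertainty
y = 5.0 * np.log10(x) + 1.5 * x + rng.normal(0, sigma, x.size)   # synthetic "data"

def chi2(model_y):
    return np.sum(((y - model_y) / sigma) ** 2)

# Model A: logarithmic + linear terms (the family the synthetic data were drawn from)
A = np.column_stack([np.log10(x), x, np.ones_like(x)])
coef_a, *_ = np.linalg.lstsq(A, y, rcond=None)

# Model B: a straight line only
B = np.column_stack([x, np.ones_like(x)])
coef_b, *_ = np.linalg.lstsq(B, y, rcond=None)

print(f"chi-square, model A: {chi2(A @ coef_a):.1f}   model B: {chi2(B @ coef_b):.1f}")
# The by-eye scatter is ambiguous at this noise level, but the statistic clearly
# penalises the wrong model family.
```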

      • Roberto Thank you !

        What's your take on the study that is the basis of this newspaper report?:

        "A landmark study has concluded that the skewing of results by small studies — and the implicit bias such studies introduce — is often much larger than the margin researchers cite as proof of statistical significance.

        The study, published last week in the journal PNAS, reviewed more than 3000 meta-analyses covering almost 50,000 papers across all 22 scientific disciplines. It found that small studies, whose modest sample sizes tended to produce low-precision results, were by far the biggest source of bias.

        On average, the “small study effect” accounted for 27 per cent of the variation in reported effect sizes on any given topic — a figure described as “astronomical” by long-time scientific integrity campaigner John Ioannidis, who led the study.

        Professor Ioannidis’s team also investigated other sources of potential bias, including long-distance collaborations, pressure to publish, a supposed tendency of US researchers to over-estimate effect sizes, and the lack of peer review in the so-called “grey literature”, such as conference proceedings and PhD theses.

        The study concluded that, on average, these types of factors were responsible for 1.2 per cent of reported variance in effect sizes, although this was a conservative estimate and diverged widely from discipline to discipline.

        Professor Ioannidis holds professorships in medicine, health policy and statistics at Stanford University. He once demonstrated mathematically that published research conclusions were likelier to be false than true.

        He said a 1.2 per cent skewing of results could be very substantial, with many effects “discovered” in modern research — such as a drug’s impacts, or meat’s contribution to cancer risk — claimed on the basis of variances as small as 0.2 per cent….”

        BTW Carl, I imagine that 'He once demonstrated mathematically that published research conclusions were likelier to be false than true' might 'tickle your fancy'.

        • Carl V Phillips

          That study is actually wrong. Frankly, most of his stuff is just sensationalist crap that is either trivial or wrong. In this case, Gelman pointed out his error, which is a bit more technical (his errors are often quite simple). The analysis assumes that sample size is random, or at least independent of priors about the effect size of interest. But if we (quite reasonably) assume that researchers who are looking for what are believed to be smaller effects only bother if they can have a large n, while those who are looking for something big might just go ahead with a small sample (and this is what would happen if people were doing silly "power calculations" to determine sample size), then you get that same skewing without publication bias. He explains this in a recent blog post.
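          To see how that mechanism plays out, here is a toy simulation of my own (not Gelman's or Ioannidis's code; every number in it is invented): every single result gets "published", yet reported effect sizes still shrink as sample sizes grow, simply because teams expecting small effects choose big samples.

```python
# Toy simulation (all numbers invented): a "small-study effect" with zero publication
# bias. Teams expecting a small effect run a big study; teams expecting a big effect
# run a small one (the usual n ~ 16/d^2 power-calculation rule of thumb).
import numpy as np

rng = np.random.default_rng(1)
n_studies = 2000

expected_effects = rng.uniform(0.05, 0.8, n_studies)             # what each team looks for
sample_sizes = np.clip((16 / expected_effects**2).astype(int), 20, 5000)

# each study estimates its true effect with noise that shrinks as the sample grows
estimates = rng.normal(expected_effects, np.sqrt(4.0 / sample_sizes))

corr = np.corrcoef(np.log(sample_sizes), estimates)[0, 1]
print(f"correlation between log(sample size) and reported effect size: {corr:.2f}")
# Strongly negative: small studies report larger effects, even though every result,
# null or not, was "published".
```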

          As for the "most results are false" claim, that one is crap for more trivial reasons: he did not actually analyze whether results were true or not, but rather whether they would have passed a particular statistical test done in a particular way. The reality is that ALL study results are wrong, because a result is not one of these dichotomous tests (which naive researchers treat as if they were results) but the measurement itself. The measurement will never be EXACTLY right, and therefore it is wrong. This is actually a case where his stuff is both trivial AND wrong, quite a trick.

        • John Walker

          Carl Roberto
          I thought that might be the case.
          BTW
          Is it possible to do a short layman's guide to things to look out for – signs that indicate that you should 'look that scientific study in the mouth'??

        • Carl V Phillips

          Oh, if only. I don’t know anything good. I am not even sure I could write anything good. It is tougher than I make it look. As in, “being able to do something is an order of magnitude easier than being able to tell someone else how to do it.” I’ll think about it.

          In any case, it would probably not cover stuff like Ioannidis’s. His stuff is a unique niche that I would call “cutesy games with statistics.” Not simple misuse of statistics in a way to make a worldly claim that is not warranted (I could probably write a guide to cover the majority of that). Rather, it is a deeper dive into what you can do wrong with rules of thumb that are not quite right.

        Thanks. I guess if it was really possible to write 'the dummy's guide to spotting BS science' it would have been done by now.

          My only rule of thumb offering re TC and Public Health:
          my radar goes "Danger, danger, 'loose morals' approaching" whenever I sense that a spokesperson (for anything) really believes that they are what they represent.

        • Roberto Sussman

          John, I'm not sufficiently familiar with (nor qualified to make) a proper comment on Ioannidis. My intuitive feeling (I may be mistaken) is that his stuff contains a lot of "statistical curry": a hodgepodge built on unwarranted assumptions. At least in my research area I can categorically say the statement "most (published) results are false" is not true, though (as Carl commented) the veracity of the statement depends on what is meant by "false".

          The attempt by Ioannidis and others to use some form of scientific methodology to examine the workings of science itself is also related to the more mundane evaluation or measurement of scientific activity (the "metrics" problem) undertaken either by peers or by science bureaucrats. Contemporary science is too big and extensive and requires a lot of resources, so there have to be some minimally logical and consensual protocols and mechanisms, based on demonstrable results, to decide who gets hired, promoted or funded, who gets acclaimed and who gets fired. Looking at "metrics" (articles, citations, "impact factors", collaboration networks, etc) provides useful information and is necessary for science to function, but some scientists and some bureaucrats treat the study of metrics as if it were in itself some sort of exact science, that is, as if there were some methodology (employing techniques like statistical inference, neural networks or data mining) that could allow them to measure and evaluate science accurately and even make predictions. I believe this is wrong and misleading. In fact, the obsession with metrics as if they were the data of some exact science is one of the worst deformations of contemporary scientific activity.

          There is a broad consensus on the many problems plaguing contemporary scientific activity: the "publish or perish" attitude, young scientists forced into academic sweatshops, excessive dependence on fund providers and employers, publication bias, all of which can be aggravated for research subjects with strong political and financial connections (as in the case of the tobacco/nicotine science I commented on at length). In my opinion these are complex issues that cannot be fully understood or resolved by some clever "silver bullet" scientific approach. In fact, it seems to me (again, I am cautious in saying this) that this is what Ioannidis is trying to do when he attempts to look at these problems "scientifically", using metrics as data to be manipulated statistically (or by other techniques) in order to find some sort of fit to a social or behavioral model (though sometimes I get the feeling that, more or less, he has a predetermined model when he screams from the rooftops that it is all bonkers).

  3. What a fantastic Friday read! Thank you Mr A Sussman.

  4. The critical thing is the lack of control over what ‘academics’, like Glantz, say. I do not mean censorship. I mean very loud shouting. Even voices which point out fatal flaws in methodology are drowned out by repetition and exaggeration.
    The ‘science’ of ‘Epidemiology’ has been ‘hijacked’. It is no longer ‘science’. It is ‘politics’.

    • Roberto Sussman

      Hi Junican, as you say, the issue is not "control" or "censorship", but scrutiny. In fact, this scrutiny should be done by the scientists themselves, as it is in theoretical physics. String theory was widely acclaimed but is now under heavy criticism, both from within science and from the science media (google "criticism of string theory" or "string theory officially questioned"). The dark matter paradigm and supersymmetry are also under heavy critique. Science is not a holy grail but an ongoing process.

      However, neither the existence of dark matter nor supersymmetry has political and social implications, and neither affects global politics or the vested interests of state bureaucracies or industries (pharmaceutical and tobacco). It is a whole different ball game for the science that deals with tobacco/nicotine issues (especially the large part of it that is advocacy disguised as science). In my opinion, in this science the scrutiny cannot be purely internal (Glantz and minions scrutinized and evaluated by their own cabal would be a political farce). The scrutiny must be internal AND external, and would have to involve representatives of the lay public. It is a slow, additive process that may have already started, with social media becoming part of the debate and with part of public health beginning to endorse (perhaps still too timidly) a THR approach to regulation. Obviously, it is a long battle because those opposing this process have a lot to lose.

      • natepickering

        “In fact, this scrutiny should be done by the same scientists, as is done in theoretical physics.”

        This is something that doesn’t get mentioned often, even though it’s hugely important. In an inherently honest enterprise populated by inherently ethical people, intellectual rigor does not need to be imposed externally.

        • Carl V Phillips

          And you give me yet another thing I should mention in an upcoming post. If only I can keep track of it all.

          That would be this: It is like the difference between “Can you find my phone? I think I left it in the kitchen but I just don’t see it.” and The Joker saying “I have hidden a biological weapon somewhere in downtown Gotham. Try to find it before it activates!” In one case you have someone doing their best to do something right and the role of the external party is to provide a reality check against one-off goofs or lacks of understanding. That is the same as the role of a professor looking at a student’s paper. There is some professional tension, of course, but basically the goals are the same. (The classic complaints about reviewers in more honest disciplines tend to be complaints about the individuals being lousy professors. E.g., saying “you did not do this the way I would have, and therefore it is wrong” rather than being able to separate their own preferences and judge something/someone on its/her own merits. It is like an average reader saying “that is good” because it endorses his own feelings about politics or e-cigarettes, rather than assessing whether the analysis and writing is actually good.)

          But the Joker case is a story of the actor trying to get away with bad deeds, and inviting the search only to increase the effects of what he was doing (create more terror; prove his cleverness; claim legitimacy). The searchers are being invited to find something that is intentionally hidden from them. As with the legitimate role of the professor, they do not know exactly what they are looking for. But unlike with the professor, they cannot ask for clarification. Often I write a review that says: there is no way that I (or any other reviewer!) can legitimately review this paper without knowing X, which is not reported; therefore I would like to review a resubmitted version where X is reported. Occasionally that is the entirety of my review. Literally never do the crap journals in public health respond to that by having the authors resubmit so the paper can be properly reviewed. Just let that sink in. They just want to churn out papers or rejection notices, not actually do any work for their exorbitant fees.

          Also, of course, there is a difference between something that is intentionally hidden and something that is legitimately misplaced. Normal intuition works well in the latter case (as it does with unintentional goofs), whereas adversarial detective work is needed for the former. Etc.

          Oh, and there is also the matter of many of the searchers actually working for The Joker rather than the good guys. And a lot of them being blind. But those are another story.

        • Roberto Sussman

          “Often I write a review that says: there is no way that I (or any other reviewer!) can legitimately review this paper without knowing X, which is not reported; therefore I would like to review a resubmitted version where X is reported.”

          Is “X” data? Is this related to the Karolinska Institute refusing to release to you and Dr Rodu crucial data on a snus study?

        • Carl V Phillips

          Reviewers certainly should have access to the data, if the process is supposed to do what it is claimed to do. But, no, I seldom (perhaps never) bother to ask for that. It is not always the same thing, though probably my most common demand is for the authors to provide the results from alternative model runs or (vanishingly unlikely!) positively affirm that they developed the one model and ran only it. The backstory there, of course, is that I suspect that the specific model choices were made to best get the results they want, either for political reasons or simply to get a bigger number. I started pointing out that this was the huge serious problem with most of the epidemiology literature over 15 years ago. (Not silly statistical trivialities — see previous comment about Ioannidis et co. The naive stats people are finally starting to talk about this, btw. They decided to call it “researcher degrees of freedom” which is cute, but a misnomer. They do not seem to have bothered to find the papers I and a couple of others wrote about it long ago (we gave it better names).)

          Oh, but this does relate to the Karolinska lies: The smoking gun there was the same team of researchers running the same dataset looking at various disease outcomes. Each paper (about a single outcome) used different subsets of the data, different covariates, different functional forms for variables, etc. It was blindingly obvious that they were making choices to try to create/inflate the association with snus that they wanted to claim for political reasons. The papers did not even reference the previous papers in the collection, probably to avoid drawing attention to this, let alone acknowledge that they were using completely different models. Of course, they offered no justification for the model choices — they could not.

          Anyway, sometimes the X is the actual wording of a key survey question that the authors characterized but did not report. Often this is quite crucial missing information. Sometimes it is other key bits of methodology — anything from the setting for the data gathering (makes a huge difference for sensitive topics), to the method used to recruit subjects (obviously important), to actual definitions (e.g., of “current e-cigarette user”), etc.

          As I said, sometimes I suspect that there is not really a problem, and just say it should be reported. Sometimes I say that the paper cannot be properly reviewed without it, but give other comments. Sometimes I do not trust the editors or authors and say only that, noting that I have other important comments, but will only offer them if I can properly review the methods. Never, in the latter two cases, was I ever offered the chance to review a proper revision. It is just that bad in public health publishing (and the journals that readers of this blog might like better are no exceptions).

        • Roberto Sussman

          “probably my most common demand is for the authors to provide the results from alternative model runs or (vanishingly unlikely!) positively affirm that they developed the one model and ran only it. ”

          This is a common demand in all peer-reviewing. In a lot of papers that I referee or adjudicate, I notice that the authors have examined a given effect or phenomenon (say, dark matter clustering) within a given model without mentioning the existence of alternative physical assumptions (say, on dark matter). Ideally, I should demand that the authors consider the various available models and fully compare their predictions, but in practice this would yield a very long "review" (or "state of the art") paper. Such papers are not ordinary papers; they are usually written by invitation and published in special journals like Physics Reports or in special issues of normal journals. Therefore, for normal papers I only ask the authors to include appropriate mention of alternative approaches. By this I mean not only citing them and providing one-liners, but at least a brief but well-explained discussion of the differences between models and of what this could imply for the study under consideration. While this request does not read as a difficult task, and most authors comply with it, some authors regard it as an unjust imposition, which may even prompt them to request another referee or submit to another journal.

          If, as you say, it is difficult in the public health review process to ask authors even to mention alternative methods or models, then the peer review of "normal" papers there really is of poor quality. I know that the medical literature also contains comprehensive "state of the art" papers, for example the "Surgeon General" reports, though I imagine these are rather political documents. I have looked at a few "state of the art" papers on smoking: a comprehensive one by S Hecht, published in the JNCI in 1999, and another (a "commentary") by Vineis et al, published in 2006, also in the JNCI. Hecht's paper looks more like what I would expect from a similar "state of the art" paper in physics (though the regulatory policy is also predetermined), whereas Vineis et al looks more like the type of advocacy disguised as science that we have criticized. I suspect that the newer a medical article on tobacco/nicotine is, the more likely it is to be advocacy masquerading as science.

        • Carl V Phillips

          Yes, government documents are basically always political. I talked about that in a recent post, the one with the interview with Lee Johnson. They are generally written by people who would not understand this conversation we are having. It was not always so: The first SG report on smoking brought together some of the brightest minds in epidemiology. But the recent one on ecigs was written by total dullards.

          Regarding alternative models, I think maybe I am talking about something much simpler than you are. If I understand correctly, you are talking about fundamentally different underlying premises that would require substantially different approaches. In public health, including medicine, most research is done without any consideration of premises at all. That is the problem. So when deciding what to put in their statistical model, there are no rules. Include this measure of mental health status in the model or not? Make this variable linear or dichotomize it? What cutpoint to use for the dichotomization? Sometimes one of these choices is obviously wrong. But there is a wide plateau of "basically seems reasonable". So what do they choose? For most of them, whatever model produces the result they like best. So what I am asking for is simply "what would happen if you made other seemingly just-as-good choices about your calculation?" and, more specifically, "what DID happen when you DID do so?"

          So back to the comparison you introduced: It seems to me that if someone tries out different models of the type you describe on their physics data then: (a) they would probably publish each because it would be a lot of work; and (b) if there was a way in which the data fit one model a lot better than the others, then it would also be offering a good test among a finite pool of competing models.

          But in the epidemiology story I am talking about, none of that is the case: Running these other statistical specifications (perhaps a better term than "model", which implies something with some gravitas) takes seconds for each one and can be done (is done) by some RA who could not even explain to you what the software she was running and rerunning does (though neither could the professors). That is why they inevitably are trying and throwing out many, perhaps many hundreds, of model runs before cherrypicking one to publish. There is never any testing of premises/models in this, though. There is little or no thought about which models to test. For example, the obvious step of publishing what happens if you run the data through the statistical specification (or as close as possible) that previous authors studying the same exposure-outcome combination used, to see if the data further support their model, is LITERALLY NEVER DONE. That is a way to deal with another problem, which is that there is not nearly enough theory to create more than the barest skeleton of a model (so using someone else's specification at least prevents cherrypicking). Oh, and also there are no clear focal points for any underlying theory/premise/model like there often are in physics. It is a many-dimensional space of both continua and bright-line choices, none of which look much better than the nearby ones. It is the difference between focusing a telescope, which has a clear best point in any neighborhood, and adjusting the sensitivity dial on a spring scale, which is probably getting you closer to right if you are doing it, but you will never know if you got it right or overshot.
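          As a toy sketch of that specification-scanning problem (my own illustration; the variable names, cutpoints, and "reasonable choices" are all invented): the exposure below has no effect at all on the outcome, yet scanning a handful of equally defensible ways of handling one covariate yields a spread of "effects", and whoever picks which run to report can pick the biggest.

```python
# Toy illustration of "researcher degrees of freedom" / specification scanning.
# Invented data: the outcome depends only on age, never on the exposure, but the
# exposure is more common at older ages, so how age is entered into the model matters.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
age = rng.uniform(18, 80, n)
exposure = rng.binomial(1, (age - 18) / 100)          # more likely among older people
outcome = 0.02 * age + rng.normal(0, 1, n)            # driven by age only

def exposure_coefficient(age_term):
    # ordinary least squares of outcome on exposure plus one chosen "adjustment" for age
    cols = [np.ones(n), exposure]
    if age_term is not None:
        cols.append(age_term)
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1]                                    # coefficient on exposure

specs = {
    "no adjustment": None,
    "age, linear": age,
    "age > 40": (age > 40).astype(float),
    "age > 50": (age > 50).astype(float),
    "age > 65": (age > 65).astype(float),
}
for label, age_term in specs.items():
    print(f"{label:15s} -> 'effect' of exposure = {exposure_coefficient(age_term):+.3f}")
# Each choice looks defensible in isolation; the estimates range from near zero to a
# sizeable spurious "effect", and nothing forces the author to report more than one.
```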

          I wish I knew how we could write a book about this. We just need to add a Canadian biologist to the team. :-)

  5. Thank you Carl for bringing Dr. Sussman’s work to the series. He proves the point that the object is propagandizing science (and bad science at that).
    Nail meet hammer:
    “However, this resemblance is a deceptive illusion based on common external markings and trappings. Tobacco Control is not a proper scientific research journal that serves a real academic community. It is a journal for a loose alliance of academics and regulators (mostly, physicians, lawyers and other non-scientists) whose main task is to advocate and promote a specific tobacco regulation policy with the aim of eradicating tobacco and nicotine usage.”
    I’ve written to you before on the problem this poses for real public health efforts. That this kind of elision will undermine the credibility of the work of many solid epidemiologists and poses, therefore, a grave threat to real public health going forward. Yet this charade continues not only unabated but is expanding into many other areas of ‘behavioral epidemiology’ vis-a-vis diet, exercise, etc.
    This will not end well. The destruction of true epidemiology will not end there, indeed there is a growing distrust of medicine in general simply by association. The consequences could make the Zombie Apocalypse look like a company picnic.

    • Roberto Sussman

      Indeed, the current use of junk epidemiology as a fig leaf to justify a (previously fixed and indisputable) tobacco/nicotine regulatory policy threatens to undermine the public credibility not only of behavioral epidemiology, but of medical science as a whole. I have tried to voice this concern among fellow scientists with mixed success, for several reasons that I expressed in my guest post and expand on below. The difficulty in explaining these concerns is due to:

      (i) the inertia of trusting a "credentialed" authority over any "outsider" critic, even outsider critics who are themselves scientists. As a first reaction, a lot of colleagues, in my field and in other fields, tend to regard the likes of Prof Glantz (to name a visible case) as fellow academics in another field, and hence assume that they occupy a niche similar to their own. The academics I have talked to got their professorships and positions by passing a strict peer evaluation of their academic merits, so recognition of their expertise rests on actual academic merit. They assume as a first reaction (from the external similarity of the niches) that this should also apply to tobacco control professors and academics.

      Therefore, since we scientists tend to be skeptical by training, we tend to be (at least as a first reaction) suspicious when somebody firmly disputes the expertise of another scientist: the disputer could be a crank (if a non-scientist), or, if a fellow scientist, could perhaps be misguided, naive, ignorant, or a peddler of some vested interest. In my case, since I am a known cigar smoker and vaper, fellow scientists tend to assume as a first reaction that my criticism of tobacco control science stems more from my personal resentment of smoking/vaping bans than from my own expertise and/or my examination of the medical literature.

      Most colleagues never get past this first reaction and do not bother engaging the evidence that I can show them (which takes time and patience to understand and appreciate). A minority is willing to listen, and after looking at the issues they realize that I have a strong case, but even then they are reluctant or unwilling to take action. Still, it is a long uphill battle, and I regard it as an achievement that at least some fellow scientists are becoming aware of the abuse of science taking place to uncritically justify the current regulation of tobacco and nicotine and the junk-science defamation of e-cigs and smokeless tobacco.

      (ii) many academics in other disciplines share the authoritarian regulatory attitude of those who conduct “tobacco control” science. The overwhelming majority of fellow academics neither smoke nor vape, and a large minority of them are viscerally disgusted by cigarette smoking. When engaging these colleagues I tell them that the important issue is not “liking or disliking cigarette smoking”, nor denying the real harms from primary smoking (and the science behind it), but opposing the justification of an authoritarian regulatory policy in terms of lies (comparable to crank opinion) that pass as medical science (the ETS issue, the rejection of THR, and the defamation of e-cigs and snus).

      Some of these academics see my point and agree, especially on the THR approach and on not going too far in shaming and stigmatizing smokers, since after all they are well aware that modern medicine, while still reliable, is full of vices and corruptions (thus it is ridiculous to treat it as something “cast in stone” that cannot be disputed).

      However, some fellow academics fully justify and endorse tobacco control’s abuse of science along the lines of “the ends justify the means”: in other words, they admit that a lot of anti-smoking regulation (outdoor bans) is cruel and authoritarian and is based on junk science, but (they retort) the eradication of the hated cigarette makes it worth pursuing. They are authoritarian eugenicists. They do not realize that if this type of eugenics (especially when based on scientific fraud) is not contested, it leads down a slippery slope toward authoritarian control of lifestyles for “the greater good” that can easily get out of control. We are all aware of how this type of utopia has led to disaster.

      I try to emphasize that, whether one agrees or disagrees with a regulatory policy, it is very important to examine whether its basis is actual science, not the needs of activism or advocacy, even if those advocating have the best intentions. The danger is, as you say, that the activism disguised as science by tobacco controllers could discredit legitimate and well-conducted public health science.

      Finally: in my opinion there are reasons to be cautiously optimistic. The tobacco regulatory paradigm devised in the 1990s, based on coercive abstinence and deliberate misinformation on ETS damage (together with an infinitely bad “Big Tobacco” and a junk “addiction” theory), has now run its course; it has become sterile dogma devoid of real scientific value. Like all rigid ways of thinking, sooner or later it starts cracking. Those who uphold this dogma (or benefit from it) know that once a little hole is pierced in the doctrine its decline becomes unstoppable, hence their desperate opposition to THR. The paradigm shift towards THR can reactivate and regenerate tobacco/nicotine science and (hopefully) provide alternative, more humanist regulatory policies. Evidently, this implies a clear public health benefit. I believe that this process has already begun and will overcome the old authoritarian eugenic paradigm in the end.

      • natepickering

        “They assume as a first reaction (from the external niche similarity) that this should also apply to tobacco control professors and academics.”

        It seems like actual scientists would be the first to point out that the term “tobacco control” itself explicitly acknowledges a lack of objectivity. That tobacco needs controlling is simply taken for granted as a fact about the world.

        • Carl V Phillips

          Here is a twist on that, and it actually makes it worse than you suggest: You actually could have a scientifically legitimate journal of tobacco control, just as you could have a legitimate (in the same sense) journal about weapons or computer virus design, or studies of how to make slavery more efficient. Just because something is an inherently malevolent force does not mean that it cannot be studied with legitimate science. Some of that science would inevitably further the malevolent cause, of course (science has that unfortunate property of not inherently favoring good over evil), though some of it would be of more use to its opponents.

          But Tobacco Control is not actually a scientific journal dedicated to studying tobacco control. As Roberto points out, it is an activist magazine dedicated to promoting tobacco control by publishing stuff that resembles science. Indeed, a scientific journal devoted to studying tobacco control would look a lot like this blog (and Rodu’s and Snowdon’s): most of what it contained would show that tobacco control claims are mostly lies and that their policies are basically all failures.

          I should try to remember to mention that in my next post on this.

        • Roberto Sussman

          The term “control” after “tobacco” already betrays the fact that what we call “tobacco control” is a predominantly political enterprise, not a scientific or even administrative one. For this reason internal scrutiny is insufficient and could even be a political farce (like internal scrutiny within the Academy of Sciences of the USSR in Lysenko’s time). However, it would take a major political upheaval to force academics and physicians in “tobacco control” to accept external scrutiny conducted (say) by scientists in other fields.

          Properly understood, the term “control” applies to contagious plagues or epidemics, which may require aggressive regulation that could even violate civil liberties in extreme cases. Current tobacco controllers see it this way, but it is a foul approach. Tobacco/nicotine usage is neither a plague nor an epidemic; at worst it can be called “endemic” or “pandemic”. I have read in critiques of medical science that one of the intellectual mishaps of current medicine (since the late 1980s) is to automatically transfer the methods used successfully to treat contagious diseases to diseases somehow connected to lifestyles (“non-contagious” diseases). Hence the allusion to the “tobacco epidemic” ravaging the world and in need of “tobacco control” (just replace “tobacco” with “malaria” to see the parallel).

          I have explained all the points we have discussed here to colleagues (physicists and mathematicians). Those who have been patient enough to listen are fully convinced that the science behind current tobacco/nicotine regulation is not real science, even if they personally dislike tobacco. The end of this pseudo-science will come the day scientists like them conduct external scrutiny of it.

          I fully agree that tobacco and nicotine issues can be examined without abusing or deforming science. The right name for a journal undertaking a proper scientific approach would be (in my opinion) Tobacco & Nicotine Science and Regulation, not “Tobacco Control”. However, the name change would be cosmetic without the major change necessary for such a journal to be a scientific one: regulatory policy recommendations should follow from (and be conditioned by) scientific findings subjected to hard reviewing, as opposed to the current approach in which non-rigorous science is “commissioned” to justify a rigid pre-defined regulation. Also, a scientific approach to tobacco/nicotine would have to be primarily based on THR, not only on abstinence (especially coercive abstinence). Today these changes sound like pipe dreams, but I am convinced that they will eventually occur.

          Among the major players in tobacco control there was a physicist called James Repace, who worked for the EPA (the guy who claimed that winds of 300 mi/hr would be needed to clear ETS in smoky bars). I have read some of the physics he wrote on ETS. It is extremely low-quality junk. Had I been a referee of his papers I would never have allowed such rubbish to be published (let alone be used to justify regulation). There is good epidemiological research on ETS (for example, Enstrom-Kabat) published between the 1980s and 2005, but current medical journals ignore it when dealing with tobacco/nicotine issues because it does not justify the desired policy (intrusive and extensive smoking bans). Instead, a lot of junk on ETS has been commissioned to replace that body of decent research. Minimal consistency would require forcing current tobacco science (the researchers, the reviewers and the editors) to acknowledge this research and to base regulatory recommendations on it.
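          To illustrate, purely schematically, the kind of back-of-the-envelope physics such claims rest on, here is a generic steady-state, well-mixed dilution sketch in Python. It is not Repace’s actual calculation, and every number in it is a placeholder assumption.

```python
# Generic well-mixed, steady-state dilution sketch: C = G / Q, so the
# ventilation needed to hold concentration below a target is Q = G / C.
# NOT anyone's published calculation; every number is a placeholder.

def required_ventilation_m3_per_h(emission_mg_per_h, target_mg_per_m3):
    """Steady-state single-zone box model: Q = G / C_target."""
    return emission_mg_per_h / target_mg_per_m3

# Placeholder assumptions, chosen only for illustration:
n_smokers = 10
emission_per_smoker_mg_h = 50.0    # emission rate of some ETS marker
target_mg_per_m3 = 0.01            # concentration deemed "acceptable"
room_volume_m3 = 500.0
cross_section_m2 = 20.0            # opening the replacement air flows through

Q = required_ventilation_m3_per_h(n_smokers * emission_per_smoker_mg_h,
                                  target_mg_per_m3)
ach = Q / room_volume_m3                      # air changes per hour
velocity_m_s = Q / cross_section_m2 / 3600.0  # equivalent face velocity

print(f"Q = {Q:,.0f} m^3/h  ({ach:,.0f} air changes/h, "
      f"~{velocity_m_s:.2f} m/s through the opening)")

# Note: Q scales as 1/target.  Lowering the "acceptable" concentration by
# four orders of magnitude raises the required ventilation (and any
# equivalent wind speed) by the same factor, so in calculations of this
# form the chosen risk target, not the fluid dynamics, does most of the work.
```

          Whether a calculation of this form yields a modest airflow or an absurd “wind speed” depends almost entirely on the assumed acceptable concentration, which is exactly the kind of premise that deserves open scrutiny rather than being buried in the arithmetic.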

          Now, we may ask: suppose that current tobacco/nicotine researchers finally admit they have lost the science battle. Suppose they are forced to admit that the scientific evidence for harms from ETS (which justifies aggressive regulation) is weak in the best case and complete rubbish in the worst case. Is this sufficient to produce a shift in policies? Likely not in the short term, but perhaps in the long run. Most lay physicians and most of the lay public (including politicians) are still not aware of the lack of scientific basis for the claim that ETS is a grave health hazard. Once this fact becomes more widespread and better known throughout society, it will be used to challenge the most extreme policy abuses (like outdoor bans or the proposed smoking prohibition inside the homes of HUD public housing) as violations of civil liberties. I am convinced this will happen sooner or later.

  6. The king is indeed naked.

    Thank you for this post Dr. Sussman!

  7. Tobacco Control may “undertake(s) a formal peer-reviewing process”, but I do not credit a journal that ascribes its peer reviews purely to anonymous reviewers as conducting “formal”, or even real, peer review. Of course, they may hide the identities of their “peer reviewers” only in the cases when they want to give negative reviews, but it was still a surprisingly sleazy approach to reviewing serious research that goes against their desired conclusions. (An early, unedited version of the research that was submitted can be accessed at: https://www.scribd.com/document/9679507/bmjmanuscript )

    – MJM

    • Roberto Sussman

      Hi Michael. Practically all peer reviewing in physics journals is undertaken by anonymous referees. I myself act as an anonymous referee for various journals about once every two months. The justification for this anonymity is to allow reviewers to criticize manuscripts without being pressured. The system is not perfect, since referees can also abuse the non-disclosure of their identity. Some argue that anonymity should be reciprocal, i.e. “double blind”, with both authors and referees acting anonymously.

      In my view, the problem with the reviewing process at Tobacco Control is not the anonymity of the referees, but their role as enforcers, making research comply with a previously agreed, indisputable regulatory policy. I have seen your submitted manuscript. As far as I can tell it is technically correct, well motivated and well written, but it has no chance of being published by the Tobacco Control journal: it certainly clashes with the tobacco regulatory policy whose promotion is the central concern of the journal. There is no secrecy in this; it is officially stated in their recommendations for prospective articles (which I cited in my essay).
