What’s not there

In How to make the world add up: Ten rules for thinking differently about numbers,* economist Tim Harford’s Rule Six reads

Ask who is missing.

It is sound advice.  Too often, we are so busy thinking about what IS there that we forget to look for what IS NOT there.  When we look at studies and surveys and ponder their conclusions and implications, it is important to know who and what were surveyed and studied, and where, when and how the investigations were carried out.  With surveys, we need to know the demographics of the sample investigated: factors such as age, gender, place, ethnicity, religion, class or wealth, job or employment – along with the size of the sample and how participants were chosen – could all affect conclusions about whatever is being studied, including whether those conclusions might (or might not) apply to those who were not studied, who did not take part.  Unless the sample includes everyone in the population, we cannot (at least, we should not) generalise and claim that whatever we have concluded applies universally.

Caroline Criado Perez makes this point over and over in her book, Invisible Women: Exposing Data Bias in a World Designed for Men.  I think this paragraph from my post Invisible women (31 August 2019) (in part a critique of her work, in part a criticism of her publishers) makes the point well:

Perez demonstrates how women are often under-represented in and even excluded from tests of new drugs.  When volunteers are invited to participate in trials, there are often more men than women taking part – possibly (says Perez) because men have more free time than women, who often have invisible, uncounted caring work which needs to be done in their “free” time – “caring” including care for elderly or young relatives, including the shopping and cooking and house-care, and more.   It may even be that women are deliberately excluded from drug trials because they react less consistently than men,  their metabolisms are different, the stage of the menstrual cycle may affect their reactions. Pregnant women may be excluded for fear of damaging the foetus or the mother or because, again, the stage of the pregnancy may affect reactions. Women may just be too complicated to figure out – so rather than testing to discover if smaller (or larger) or even variable dosages might be helpful, it is easier to ignore women altogether and just let the world assume that the effects on women will be the same as on men.

Asking who is missing is an important consideration when evaluating any source, whether online or in print: who is missing? What is missing?

Asking who is missing was certainly a consideration in Elisabeth Bik’s critique of Philippe Gautret, Didier Raoult and others’ paper Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial, published in the July 2020 issue (vol. 56, no. 1) of the International Journal of Antimicrobial Agents.

Side note:  Philippe Gautret is the first named of 18 authors; Didier Raoult is the last named and is noted as the corresponding author. While academic papers tend to cite Gautret as lead author, many press reports refer to Raoult’s paper.

Although Elisabeth Bik critiqued a preprint of the paper (posted 20 March 2020), her comments stand with the published version.

Gautret’s paper appears to have been one of the influences behind President Donald Trump’s promotion of hydroxychloroquine as a cure for coronavirus in March and April 2020 (see for instance Hydroxychloroquine: how an unproven drug became Trump’s coronavirus ‘miracle cure’), but the paper has been widely criticised and appears to be deeply flawed. The US Food and Drug Administration (FDA) issued an Emergency Use Authorization on 28 March 2020, allowing the drug to be used in certain circumstances, but withdrew the authorization less than three months later because, as stated in MedlinePlus:

clinical studies showed that hydroxychloroquine is unlikely to be effective for treatment of COVID-19 in these patients and some serious side effects, such as irregular heartbeat, were reported.

Elisabeth Bik’s article Thoughts on the Gautret et al. paper about Hydroxychloroquine and Azithromycin treatment of COVID-19 infections on her Science Integrity Digest blog (24 March 2020) raised a number of issues with the Gautret-Raoult study and the manner of its rushed acceptance by the International Journal of Antimicrobial Agents (IJAA).

Among other things, she criticises the methodology, especially the non-random, highly selective choice of the patients studied, both in the study group (given hydroxychloroquine) and the control group (not given hydroxychloroquine).  Granted, the authors state in the very title of the paper that the study was non-randomized. What they do not explain (one example of what is missing) is how patients were assigned to one group or the other.  Place of treatment was probably a factor, as all the study group were in a hospital in Marseille, but the so-called control group was spread across hospitals in four different cities (one of which was the Marseille hospital).  As Bik points out, the lack of transparency about selection could make for issues of bias, possibly unconscious; furthermore, with the control group being treated in different hospitals, their treatment could not be standardised, and other factors may have been introduced or absent.

An even bigger “who is missing?” issue is the fact that 26 patients treated with hydroxychloroquine started the study, but the analysis and discussion centre on just 20 of them.  It appears that 1 of the 6 missing patients died, 3 were transferred to intensive care (they must have got sicker, not better) and 2 stopped taking the drug, one suffering from nausea, the other released from hospital, better without the drug.

And when you exclude these 6 patients from the study, miracle of miracles, the condition of the 20 remaining patients improved.  Twenty out of 20 patients improved; that is 100% efficacy, no wonder Donald Trump enthused!  There is no telling whether those 20 would have got better anyway without the drug (and later studies and trials find little or no evidence that it has any effect or efficacy).
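The arithmetic is worth spelling out. Here is a minimal sketch (my own illustration, not anything from the paper itself) of how dropping the six patients changes the headline figure; the intention-to-treat number rests on my assumption that patients who died, worsened or withdrew should not be counted as successes.

```python
# Illustrative sketch only, using the figures summarised above from Bik's critique:
# 26 patients started hydroxychloroquine treatment; 6 were left out of the analysis
# (1 died, 3 were transferred to intensive care, 2 stopped taking the drug).

started = 26
excluded = 6     # died, went to intensive care, or stopped treatment
improved = 20    # the remaining patients, all reported as improved

# Per-protocol style figure: count only the patients who completed the study.
per_protocol_rate = improved / (started - excluded)

# Intention-to-treat style figure: count everyone who started, treating the
# excluded patients as non-successes (my assumption, not a result from the paper).
intention_to_treat_rate = improved / started

print(f"Counting only the 20 who finished: {per_protocol_rate:.0%}")        # 100%
print(f"Counting all 26 who started:       {intention_to_treat_rate:.0%}")  # 77%
```

Whichever figure one prefers, reporting only the 100% while staying silent about the six who disappeared from the analysis is exactly the “who is missing?” problem.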

It might be worth a sideways look at a May 2021 report, Scientists rally around misconduct consultant facing legal threat after challenging COVID-19 drug researcher, in the news section of Science, published by the American Association for the Advancement of Science (AAAS). It appears that after publishing her commentary, Elisabeth Bik learned that Didier Raoult had been accused of scientific misconduct and temporarily suspended from publishing in American Society for Microbiology journals. She began looking at his earlier work and found at least 63 papers with possible image manipulation, possibly flawed ethical procedures and other problems.  Raoult has accused Bik of harassment and blackmail, and according to the Science article, Bik has been supported by more than 1000 fellow scientists.

Elisabeth Bik, described in The Guardian as “a world-renowned Dutch expert in identifying scientific misconduct and error,” is not alone in finding problems in Gautret’s study, but she was one of the first to publish her misgivings. It should also be noted that, despite the withdrawal of the FDA’s Emergency Use Authorization, there are many supporters of Gautret and Raoult who advocate the use of (and/or further investigation of) hydroxychloroquine in the prevention and treatment of Covid-19.  Although they are far from academic papers, it is worth noting some of the articles and comments on Retraction Watch‘s coverage of Didier Raoult’s work and studies.

Coronavirus and the pandemic is a popular subject for students conducting inquiry projects. Little wonder – their lives are very much affected by measures to lessen the spread of Covid-19, their education disrupted along with other aspects of their upbringing, health and general well-being.  Students are not experts in any aspect of medicine or epidemiology – and nor am I.  All the more reason that we use all the critical thinking and evaluation skills, strategies and techniques available to us, going beyond simple CRAAP and CARRDS and similar tools;  lateral reading and SIFTing are essential – discovering what others have said about the sources and studies we come across and the people and organisations behind them, keeping an open mind and constantly questioning what we read. Questions such as “how do they know this?” and “does this follow from the evidence?” and “is there a hidden agenda?” and “what is missing? who is missing?” are vital tools in the toolbox.  So is going beyond the questions and finding out for oneself.

The other side of the coin

I have to admit – I am not an expert in anything. I do not understand all that I read in academic papers; many, probably most, are way beyond my ken. Nevertheless, I enjoy reading them, those that catch my eye and interest.  I learn much about research methods and academic writing (and about the subject-matter), I become aware of what to look out for, and also of what researchers should be aware of when researching and writing their studies. I gain an appreciation of how journalism and academic publishing work, and again of what I need to be aware of and keep in mind.  I think that reading academic papers and reports, and comments on those papers, adds to my evaluation toolbox and reinforces my convictions concerning, and advocacy for, honesty and academic integrity.

Which leads us to the May/June 2021 special issue of the American Journal of Health Behavior (AJHB). This Special Open Access Issue comprises eleven research studies and two editorial articles on JUUL, a leading global manufacturer of electronic cigarettes (EC).

While AJHB does not impose a submission fee for consideration of submitted papers (one of the features of predatory journals), it does charge authors a publication fee of US$895, plus an additional US$700 if they wish to have their papers made open access with no paywall restrictions (AJHB: Author Fees).

The May/June 2021 special issue is not just ON JUUL; the entire issue has been researched and written BY authors employed directly or indirectly by JUUL, and the issue was PAID FOR BY JUUL.  There is no attempt to hide this; every article in the issue includes a very open Conflict of Interest Statement, in accordance with the AJHB’s Ethical Guidelines.

Conflict of interest statements are important. When we see one, it should raise questions in our minds as to whether the interest might have created a bias in an author’s mind, a conscious or subconscious propensity to cherrypick favourable evidence or to play down or ignore counter-evidence – somewhat similar to the exclusion of the patients who died or got sicker, not better, in the Gautret-Raoult study we looked at earlier.  This is not to say that this has happened in any specific paper, but the alert reader must be aware of the possibility.  In spades when a whole issue has been taken over, as in the AJHB Special Issue.  And in this special issue, every one of the 26 named co-authors declares a conflict of interest: 18 of them currently or then employed by JUUL, 5 more employed by a consultancy firm with very close ties to JUUL Labs (4 of whom consult exclusively for JUUL), and the remaining 3 working for the Centre for Substance Use Research (CSUR), described (on page 432 of the special issue) as

an independent research consultancy which designed the study and assessments described in this paper and oversaw collection of data through Dacima Inc under contract to Juul Labs Inc.

One might wonder at CSUR’s independence, given that it was employed by JUUL to carry out the research. Indeed, the Centre for Substance Use Research has a single-substance brief,

(to provide) behavioural science support to companies seeking regulatory approval for their ENDS products (from the section About Us on the CSUR homepage).

We should not summarily dismiss research performed by or for specific companies, organisations, associations or industries. They need to conduct research and it cannot always be independent research. And this is why we must always be aware of possible bias and potential conflict of interest, on the lookout especially for what is missing, who is missing, what we are not being told.

JUUL

JUUL is a manufacturer of various electronic smoking (e-smoking) products.  On the About JUUL page on its website, JUUL claims that it markets its products as

Designed with adult smokers in mind

and as

an exceptional nicotine experience designed for adult smokers looking to transition away from traditional cigarettes.

Note the repeated use of the term “adult”: JUUL has been heavily criticised for attracting young people to e-smoking, with these young users then transitioning to cigarettes.  The maker of Marlboro cigarettes, Altria, bought a 35% stake in JUUL in December 2018, as reported by CNBC Health and Science.  It might be thought that Altria thus wins both ways, selling a product to help addicted users of its products wean themselves off cigarettes while at the same time selling a product which creates new nicotine addicts.

The CNBC article states that JUUL had about 75% of the e-cigarette market (in the USA). That market share has decreased considerably since 2018 as concerns about the product’s popularity among the young, and their subsequent addiction to nicotine, have grown.   Sheila Kaplan, “a prize-winning investigative reporter who covers the Food and Drug Administration, the tobacco industry and the intersection of money, medicine and politics” for The New York Times, reported on 5 July 2021 that Juul Is Fighting to Keep Its E-Cigarettes on the U.S. Market.  She notes in her article that the FDA has been investigating JUUL, and within the next few months is to rule on

whether Juul’s devices and nicotine pods have enough public health benefit as a safer alternative for smokers to stay on the market, despite their popularity with young people who never smoked but became addicted to nicotine after using Juul products

Kaplan notes that JUUL spent US$3.9 million on federal lobbying in 2020 (and Altria spent nearly US$11 million).  The US$51,000 JUUL paid to buy the Special Issue of the AJHB may well be regarded as a small price to pay to put on the record a range of research studies demonstrating its efficacy in helping smokers give up cigarettes – “proof” that its product has a public-health benefit, as David Dayen suggests in The American Prospect, Juul: Taking Academic Corruption to a New Level.

None of this is to say that the articles have no scientific, medical or academic merit. They could be really important contributions to our knowledge and awareness of the topic. But they are hardly independent studies.  Great care is needed as we read them and as we use them; we must have our critical faculties on the highest level of alert.  Dayen certainly does; he goes on to say

Pretty much all the articles take the Juul party line that e-cigarettes help convert smokers away from combustible tobacco products, and thus aid public health. Pretty much none of the articles mention that Juul and other vaping companies make their money by attracting countless new people to nicotine addiction. And it’s barely worth even commenting on the quality of the research when it all comes from the same corporate source.

As well as criticising the papers (though with few specific comments on the papers and the studies themselves), Dayen also criticises the journal itself and the practice of paying-to-publish in general.  He argues that science and health should not be up for sale – but here the AJHB has “turn(ed) the work of a scientific journal into what looks like advertising.”

The title of Dayen’s article uses the term “corruption.”   Is this too strong a word? “Prostitution” was my first thought, taking cash for performing a service.  But on second thoughts, “corruption” – “academic corruption” – may indeed be more appropriate, since the service sold is an attempt to subvert others by throwing what may well be misinformation and disinformation into the mix. The tobacco industry has form in this regard.

I am reminded of Matthew d’Ancona’s suggestion that the tactics of modern “post-truth” can be found in the tobacco industry’s establishment of the Tobacco Industry Research Committee in 1954, at a time of growing acceptance of the links between smoking and lung disease.  The Committee was established not to disprove any connection but to suggest that there was no proof, that other factors could be responsible.

(The Tobacco Industry Research Committee) sought not to win the battle outright, but to dispute the existence of a scientific consensus. It was designed to sabotage public confidence and establish a false equivalence between those scientists who detected a link between tobacco use and lung cancer and those who challenged them. The objective was not academic victory but popular confusion. As long as doubt hovered over the case against tobacco, the lucrative status quo was safe.
Matthew d’Ancona, Post truth (p. 42). **

A closer look

I have neither science enough nor time enough to go through all the AJHB papers (at this time), but I have read the opening editorial by Saul Shiffman and Erik M. Augustson, to get an overview of the articles in this issue.  Shiffman works for PinneyAssociates, and “provides consulting services on tobacco harm reduction on an exclusive basis to Juul Labs Inc.”  He is also internal editor and coordinator for all papers in the special issue; he is lead author on 3 of the studies and co-authored 7 of the other 8 articles.  Augustson is a JUUL employee and co-authored 5 of the papers.

I am not sure what the responsibilities of an internal editor are; proof-reading might not be among them.  The abstract for the “Introduction to the Special Issue on JUUL use” (on page 397) ends with what I think is a typo: controlnt.

There is at least one more typo; on the next page (p. 398), we read (second sentence of this paragraph):

One of the most widely used ENDS in this US is the JUUL System…

“In this US”? Which US?

These may be small points.  Of greater concern was a statement, also on page 398, which took my attention: “The Royal College of Medicine (United Kingdom) concluded that ENDS are likely to be at least 95% less risky…”

That is quite a claim.  ENDS – “electronic nicotine delivery systems,” as is spelled out earlier in that paragraph – may be at least 95% less risky than cigarettes?  That demanded verification: did the Royal College of Medicine (United Kingdom) really suggest this? The second study mentioned there, that of the US National Academies of Science, Engineering and Mathematics, found that ENDS are “associated with less risk than smoking” – but how much less is that: 90% less risky, 50% less risky, 5% less risky?

The 3-page introduction to the Special Issue has 31 endnoted references, and the paragraph in question carries superscript signals for items 8 and 9.  Why, then, are there no references for the Royal College of Medicine (UK) or the US National Academies of Science, Engineering and Mathematics?  This just goes on raising questions.

One reason for the lack of citation here might be that there is NO “Royal College of Medicine” in the UK. It does not exist.

The 95% claim is true – to a point.  A report (The Guardian again), Public Health England maintains vaping is 95% less harmful than smoking, led me to an article by Public Health England (PHE) on a UK government website.

OK, that 95% stat appears to be true.  But this raises more questions.  “Around 95% less harmful” is not the same as Shiffman’s claim, “at least 95% less risky.”   The phrase “around 95% less harmful” (alternatively “around 95% safer”) is used 5 times in the full paper, E-cigarettes: an evidence update, a report commissioned by Public Health England and published in 2015 – but there is never a claim that vaping is “at least 95% less risky.”

But it might have been a later PHE report to which Shiffman refers; the lack of a bibliographic reference is so unhelpful. Three years later, Public Health England published an update, the Evidence review of e-cigarettes and heated tobacco products 2018: executive summary, which noted a 2017 study which

concluded that the cancer potencies of (e-cigarettes) were largely under 0.5% of the risk of smoking (p.174)

and also that

comparative risks of cardiovascular disease and lung disease have not been quantified but are likely to be also substantially below the risks of smoking (p.174).

Despite the lack of quantification, PHE suggested, as a policy stratagem (as against a research outcome),

Vaping poses only a small fraction of the risks of smoking and switching completely from smoking to vaping conveys substantial health benefits over continued smoking. The previous estimate that, based on current knowledge, vaping is at least 95% less harmful than smoking remains a good way to communicate the large difference in relative risk unambiguously so that more smokers are encouraged to make the switch from smoking to vaping. It should be noted that this does not mean EC are safe (p. 175 – my highlighting).

It may well be here that Shiffman found that “at least 95% less harmful than smoking” claim – but we might also question which “previous estimate” PHE is referring to; the same problem applies: “around 95% less harmful” in the 2015 report is not the same as “at least 95% less harmful” in the later report.
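The distinction may seem pedantic, so here is a minimal sketch of why it matters (my own illustration, with made-up figures, not data from PHE or from the Special Issue): “around 95% less harmful” reads as a rough central estimate, while “at least 95% less risky” is a lower bound, and a finding that satisfies the first can contradict the second.

```python
# Illustration only: hypothetical findings, expressed as the percentage reduction
# in risk from switching from smoking to vaping. None of these numbers comes from
# the PHE reports or the Special Issue.
hypothetical_reductions = [93, 95, 97]

for reduction in hypothetical_reductions:
    around_95 = 93 <= reduction <= 97   # my own loose reading of "around 95% less harmful"
    at_least_95 = reduction >= 95       # what "at least 95% less risky" requires
    print(f"{reduction}% less risky -> around 95%? {around_95}; at least 95%? {at_least_95}")

# A finding of 93% less risky would still count as "around 95% less harmful",
# but it would falsify "at least 95% less risky".
```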

Finding the press release and the PHE reports was relatively easy – the 95% makes a useful search term – though it would have been a whole lot easier had Shiffman cited the right body and told us which particular document(s) he actually used.  Deciding which National Academies report is referred to is less certain. It is probably the 2018 Public Health Consequences of E-Cigarettes, but it is not clear what the statement “associated with less risk than smoking” refers to.  The words “risk” and “risks” are used 22 times in the Summary of the National Academies’ report, sometimes pointing to lower risk and sometimes to higher risk and above all pointing to uncertainty:

Exposure to nicotine and to toxicants from the aerosolization of e-cigarette ingredients is dependent on user and device characteristics. Laboratory tests of e-cigarette ingredients, in vitro toxicological tests, and short-term human studies suggest that e-cigarettes are likely to be far less harmful than combustible tobacco cigarettes. However, the absolute risks of the products cannot be unambiguously determined at this time. Long-term health effects, of particular concern for youth who become dependent on such products, are not yet clear (Summary, p.1).

Another strike against this Special Issue: that makes at least four, maybe five, problems – in just one sentence!

As I said, I do not have time or expertise enough to critique all the papers here – though I am hoping to find time to look more closely at one or two. I would like to know if there are problems with the actual papers, not just with the editorial introduction. If any reader wants to take a close look, please do – and please let me know your thoughts.

From the quickest of glances, and in line with David Dayen’s analysis, there appears to be much on the health benefits of e-cigarettes as against cigarettes, and little on the attraction of young e-cigarette users to cigarettes or on the public-health balance which the FDA is soon to decide.  This, and the quality of the research itself – what is missing? – might be the area(s) to delve into.

Tools for the toolbox

The topic of e-cigarettes could well appeal to our students, whether they use them or not.  They are teenagers, after all.  I can see extended essays written on e-cigarettes – with plentiful use of papers in the AJHB Special Issue.

And while as teachers we stress the importance of critical thinking and the need to read actively, even when we agree with what we are reading (or viewing or hearing), I wonder how much our students understand. I wonder if we role-model enough, think aloud enough, give our students enough support.  Certainly, Extended Essay examiners’ reports across the subjects suggest that students do not ask enough questions about the sources they use; too many seem to accept information uncritically.

I have reported misgivings about CRAAP and CARS and similar evaluation tools in earlier articles, especially when they ask questions only of the source itself.  Lateral reading and SIFT and similar techniques take us outside the source-in-hand to discover what others say of the information presented, of its authors, of the publishing organisation and more. There is also the notion that we must look at the sources which our source-in-hand uses, the context of the information used.

So let’s finish with some questions we might bear in mind, especially with research reports but perhaps any reading matter – and further suggestions are welcome.  Note too that many of these questions might be pursued even without being an expert (or a student) in the field.

  • Who is behind the research?
  • Who is funding it?
  • Do they have vested interest in the conclusion/ outcome?
  • Who is conducting the research?
  • Who is reviewing it?
  • Who is publishing it?
  • Does the research question cover the issue/problem (or is it a blind alley, trying to divert our attention from the real issue/s)?
  • What evidence is given?
  • How is the evidence obtained?
  • Is the method (for collecting the evidence) valid?
  • Is there evidence of cherrypicking? What has been left out?
  • Are samples representative? Are they sufficiently large? Who is missing?
  • Does the evidence support the points made?
  • Do the conclusions follow from the arguments and points made?
  • How authoritative are the sources used in the literature review?
  • How authoritative are the sources used in the discussion?
  • Are the sources sound, reliable and reputable?
  • Are the authors authorities / leaders in the field?
  • Are the sources reported accurately?
  • Are they taken out of context?
  • Who is not used? What is not used?
  • What has been left out of the investigation (what is missing? / who is missing)?
  • What possible counter-evidence has been omitted – data etc. thrown up in the investigation?
  • What possible counter-evidence has been omitted – data etc. from other works and workers in the field?
  • What do others say about the people behind the paper, the authors of the paper, the publishers, the paper itself?
  • How is the paper regarded by others, what do the reports and reviews say?
  • Who cites the paper? What do they have to say?
  • Does anyone citing the paper have something new to add?

We might have doubts about a paper based on some of the above.  This is not a problem: we can still use it, as long as we make our doubts clearly known, say what our reservations are, and do not invest overmuch credence in the paper without further support.

References

*  Tim Harford (2021), How to make the world add up: Ten rules for thinking differently about numbers, London: The Bridge Street Press.

** Matthew d’Ancona (2017), Post truth: The new war on truth and how to fight back. London: Ebury Press.