The politics of methods in the controversy over how to treat coronavirus

This article is part of the series Governance, in crisis.

Annabelle Littoz-Monnet
Professor, International Relations/Political Science
Graduate Institute

Juanita Uribe
PhD Candidate, International Relations/Political Science
Graduate Institute

Synopsis: The quest to find a COVID-19 treatment has incited a highly publicized debate that revives longstanding questions about scientific methods and public health interventions. It calls for greater reflection on the assumptions and limitations of knowledge production and its underlying political and social facets.

Keywords: COVID-19, expertise, scientific method, authority, bioethics

Controversies have peppered the history of medicine, and of science more broadly. The politicization of some of these controversies has prompted the questioning of scientific and expert authority. In this context, scientists, experts and researchers, along with domestic and global governors in the domain of health, have sought to reassert the legitimacy of science by delineating ‘good’ from ‘bad’ science in various ways.

The COVID-19 crisis has prompted a new and highly mediatized scientific controversy about how to cure people infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). At the heart of this controversy was the proposal from French doctor Didier Raoult to treat people with COVID-19 by means of a protocol combining hydroxychloroquine (an old anti-malaria drug) and azithromycin (an antibiotic). The ‘politics’ of science came to the fore in this conflict, including links between the production of science and the private sector, personal struggles among researchers who strive for recognition and authority, and the intermingling of science with policy. Of course, the stakes are high not only in regard to public health, but also in terms of potential economic gains and scientific visibility for those who discover a treatment or produce work that becomes highly visible.

Debates over scientific research methods have been at the core of this controversy, with associated attempts by scientists and public authorities to credit or discredit studies and their respective results. For scientists, methods have been the vector through which the boundary between good and bad science is managed. For public authorities, methods have been invoked to assert their own competence and neutrality, what governments themselves call ‘evidence-based’ policy. The World Health Organization (WHO) has been at the forefront of these claims, positioning itself as a scientific and apolitical organization in the midst of this controversy.

When Dr Didier Raoult published his first study (followed since by a second and a third), the results were attacked by some scientists on two grounds: the samples were too small (first and second studies) and his work did not rely on randomized controlled trials (RCTs), i.e. his research design did not include a control group. Some of these critiques were amplified in the media, with the claim that one could not know the effectiveness of the medicine if the group observed was not compared to a group that had received no treatment. The WHO also criticized the lack of ‘conclusive evidence’ in support of hydroxychloroquine, warning against the use of ‘untested drugs’ to treat patients, with WHO Director-General Tedros Adhanom Ghebreyesus adding that ‘Small, observational and non-randomized studies will not give us the answer we need.’

But when one looks at history, the idea that RCTs offer more reliable evidence than any other method is a relatively recent one. Although RCTs came into common use in the 1930s, it was not until the early 1990s, with the emergence of the ‘Evidence-Based Medicine’ (EBM) movement, led by a group of Canadian epidemiologists, that the method came to be considered the gold standard in medical practice. Early EBM proponents called for the replacement of the ‘old paradigm’, in which the practitioner’s intuition, clinical experience and observations acted as sufficient grounds for clinical decisions. Central to the new approach was an understanding of ‘evidence’ as hierarchical, with systematic reviews and meta-analyses of RCTs at the top and observational studies at the bottom. Clinicians were instructed, henceforth, to base their decisions on the best available evidence.

The controversy around hydroxychloroquine and what Raoult has called the ‘moral dictatorship of methodologists’ relates to the more fundamental question of which types of knowledge come to be validated as ‘truth’. With the advent of EBM, RCTs came to be seen as a self-evident and superior answer to all questions in medical care. In that sense, it is not surprising that when Dr Raoult first published his study, it was discredited by some scientists, who argued that its results were based only on ‘anecdotal evidence’.

The French National Institute of Health and Medical Research recently announced the launch of ‘Discovery’, a large European clinical trial of experimental drugs that includes the testing of hydroxychloroquine. A few days later, the European Medicines Agency welcomed the initiative, while warning that hydroxychloroquine should only be used in clinical trials or emergency use programmes, as such trials ‘will enable authorities to give reliable advice based on solid evidence’.

However, RCTs, just like any other type of knowledge, are rooted in specific theoretical assumptions about nature and the ways in which it can best be understood. While RCTs can help measure the outcomes of a particular intervention, they are limited when it comes to understanding multi-causal and context-dependent phenomena. Like other research methods, RCTs embody certain biases. An obvious form of bias may be present when trials, which often report positive outcomes, are funded by biopharmaceutical companies. But beyond this, and even when trials are publicly funded, a number of decisions are taken by researchers at every stage of the research design. Formulating the research question, selecting the variables, assembling the sample before it is randomized, analysing the data and interpreting the results all involve human decisions which reflect certain assumptions and theoretical presuppositions. Even the processes of randomizing, blinding and controlling involve decisions at every step. To give one example, choosing when to end a trial, and thus when to collect endline data, directly affects the nature of the results and hence the kind of claim that can be made about the effects of a treatment.
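The point about endline timing can be made concrete with a toy simulation. Everything below is invented for illustration (the daily recovery rates, arm sizes and follow-up days correspond to no real trial): the same underlying treatment effect looks very different depending on the day chosen to collect endline data.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def simulate_trial(n_per_arm, endline_day,
                   daily_rate_treated=0.12, daily_rate_control=0.08):
    """Simulate a two-arm trial in which each patient has a fixed daily
    chance of recovering; we record the share of each arm that has
    recovered by `endline_day`. All rates are illustrative inventions."""
    def recovered_by(day, daily_rate):
        # Probability of recovering at least once within `day` days
        return random.random() < 1 - (1 - daily_rate) ** day

    treated = sum(recovered_by(endline_day, daily_rate_treated)
                  for _ in range(n_per_arm))
    control = sum(recovered_by(endline_day, daily_rate_control)
                  for _ in range(n_per_arm))
    return treated / n_per_arm, control / n_per_arm

# An early endline shows a clear gap between arms; a late endline lets
# both arms converge toward full recovery, shrinking the apparent effect.
early = simulate_trial(500, endline_day=5)
late = simulate_trial(500, endline_day=60)
print("day 5 endline (treated, control):", early)
print("day 60 endline (treated, control):", late)
```

The underlying treatment effect is identical in both runs; only the stopping decision differs, yet the claim one could make about the treatment changes markedly.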

While such biases are inherent to any method, they are never made visible in RCTs. Such trials are presented as ultimately objective and thus authoritative. Yet major discoveries in medicine, such as the smallpox vaccine or penicillin, were made through observational methods, whose value stands ‘below’ RCTs in the hierarchical scale set by the paradigm of EBM.

In addition, ethical concerns attach to the practice of RCTs. Inherent to RCTs is the burden placed on some people to accept a level of risk in the hope that it will benefit others. Risks relate not only to the possible harmful effects of a tested treatment, but also to being placed in a placebo group and thereby denied the best available treatment. Currently, in the face of a global pandemic, this is one of the main dilemmas practitioners and public health authorities ought to be addressing.

There is no simple and unique answer as to what counts as the ‘best’ evidence in public health interventions. Each way of producing knowledge comes with its own assumptions and related limitations. These reflections have clear implications for global health, a multi-faceted governance domain in which problems often require more than a single causal explanation. There is a need, therefore, for pluralist and flexible methodologies that can answer questions beyond ‘what works’. Questions such as who counts, what is acceptable, and under which circumstances, invite us to expand existing methodological boundaries in ways that acknowledge both the validity and complementarity of diverse forms of knowledge.

Ongoing debates about how to treat coronavirus have been framed in highly technical terms. But behind these controversies over scientific methods, a broader politics has been at play. This is also true of wider debates concerning public responses to the pandemic, in which highly technical forms of knowledge (such as projections made by mathematicians or statisticians) have informed policy, often concealing the more fundamental political and social questions that should have been debated.


