Evidence-based: evidence (with reservations)

Maria A. Principalli, PhD

10/11/2025

The expression “evidence-based” has become so overused, even in everyday language, that it is now frequently treated as a synonym for reliability. Something, in other words, that cannot change over time. In scientific practice, however, the reality is quite different: anyone who works with data knows that in science nothing is definitive or immutable.

So what does it actually mean to say that something is evidence-based?

The expression “evidence-based” (literally, “based on evidence”) refers to information obtained through research strategies. This may include epidemiological data from observational studies, results from double-blind clinical trials, statistical models, or even computational simulations. All of these forms of evidence contribute to the advancement of knowledge, but they do not carry equal weight, nor do they entail the same degree of uncertainty. Each type brings with it specific limitations, implicit assumptions, and different margins of error. To speak of something as evidence-based without specifying how that evidence was obtained often serves to obscure, rather than clarify, the actual state of knowledge.

The vast majority of studies from which this expression derives, whether observational, experimental, or descriptive, belong to the category of epidemiological studies. These analyse the distribution of events within a population over time, making it possible to identify trends, associations, and differences between groups. They provide valuable indications of what happens and how frequently. Their strength lies in large numbers and in their capacity to capture real-world phenomena, but for precisely this reason they remain exposed to numerous confounding factors and other sources of bias. They can certainly identify robust correlations, but few allow the establishment of direct causal relationships with any reasonable degree of confidence.

In observational studies, which may take the form of cohort, case-control, or cross-sectional designs, researchers collect information about events without intervening: they observe but do not act. These studies allow exploration of relationships between variables in real-world contexts; their evidence is valuable, but intrinsically limited. The absence of experimental control makes it difficult to distinguish what is directly associated with an event (such as the onset of a disease) from what is influenced by unmeasured external factors. The primary purpose of this type of study is to demonstrate association, not causal links. Subjects exposed to a given risk factor ‘x’ may differ from unexposed subjects in other ways that independently influence their risk of disease. If such confounders are identified in advance, they can be accounted for both in the study design and in the analysis. There remains, however, the possibility that some confounders go unrecognised.
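The mechanism described above can be made concrete with a small simulation. The scenario is entirely hypothetical: a single confounder (here labelled "age") raises both the probability of exposure and the probability of disease, while the exposure itself has no causal effect at all. An observational comparison of crude risks nonetheless shows the exposed group as markedly more at risk.

```python
import random

random.seed(42)

# Hypothetical scenario: "age" is a confounder that drives both the
# exposure and the disease; the exposure itself has NO causal effect.
n = 100_000
exposed = unexposed = 0
exposed_ill = unexposed_ill = 0

for _ in range(n):
    old = random.random() < 0.5                       # confounder
    x = random.random() < (0.7 if old else 0.2)       # exposure depends on age
    ill = random.random() < (0.3 if old else 0.05)    # disease depends on age only
    if x:
        exposed += 1
        exposed_ill += ill
    else:
        unexposed += 1
        unexposed_ill += ill

risk_exposed = exposed_ill / exposed
risk_unexposed = unexposed_ill / unexposed
print(f"risk if exposed:   {risk_exposed:.3f}")
print(f"risk if unexposed: {risk_unexposed:.3f}")
# The exposed group appears roughly twice as likely to fall ill,
# even though the exposure does nothing: the association is
# entirely produced by the confounder.
```

If the confounder is measured, stratifying by age (comparing exposed and unexposed subjects of the same age) makes the spurious association vanish; if it goes unrecognised, as the paragraph above notes, no amount of data will remove it.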

For this reason, the evidence produced by observational studies is valuable but limited: it can identify robust associations, but it rarely allows the establishment of direct causal relationships with any meaningful degree of confidence.

Experimental studies, by contrast, are generally less exposed to bias, since it is the researcher who determines which subjects are exposed to a given risk factor and which are not. When assignment is carried out randomly and the number of subjects involved is sufficiently large, even unrecognised confounders tend to be distributed evenly across groups, reducing the likelihood that they will systematically influence the results.
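The claim that randomisation evens out even unrecognised confounders can also be checked with a sketch. In this hypothetical setup, 30% of subjects carry some unmeasured trait; after purely random assignment, its prevalence is nearly identical in the two arms, so it cannot systematically tilt the comparison.

```python
import random

random.seed(0)

# Hypothetical sketch: an unmeasured trait is present in 30% of subjects.
# Random assignment distributes it almost evenly across the two arms.
n = 50_000
arm_size = {"treatment": 0, "control": 0}
trait_count = {"treatment": 0, "control": 0}

for _ in range(n):
    has_trait = random.random() < 0.30        # unrecognised confounder
    arm = random.choice(["treatment", "control"])
    arm_size[arm] += 1
    trait_count[arm] += has_trait

prevalence = {arm: trait_count[arm] / arm_size[arm] for arm in arm_size}
print(f"trait prevalence, treatment arm: {prevalence['treatment']:.3f}")
print(f"trait prevalence, control arm:   {prevalence['control']:.3f}")
# Both arms sit close to 0.30; the residual gap shrinks as n grows.
```

This is also why the paragraph above stresses sample size: with only a few dozen subjects per arm, chance imbalances in such a trait can still be large.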

This greater degree of control, however, comes with limitations that are not only methodological but also ethical. In research involving human subjects, one cannot deliberately expose participants to potentially serious risks. This constrains the application of experimental methods in the study of disease aetiology. For this reason, such studies find their primary application in the evaluation of preventive or therapeutic strategies.

This is the context in which randomised controlled clinical trials sit, often considered the most robust reference standard for evaluating the efficacy of an intervention, as in the case of pharmacological treatments. Randomisation, variable control and, where possible, double blinding reduce many sources of bias and allow the identification of cause-and-effect mechanisms that are more reliable than those produced by observational studies. Even this form of evidence, however, is not absolute. Results depend on the study design, the characteristics and size of the selected population, the duration of observation, and the experimental conditions adopted — which, by their very nature, tend to simplify biological systems that are in reality highly complex.

It is precisely in the transition from the laboratory bench to communication that the term evidence-based reveals its full ambiguity. In attempting to make complex results more accessible, synthesis sometimes turns into extreme simplification and, in some cases, into an illusion of certainty. The expression “based on evidence” ends up suggesting a degree of reliability that scientific knowledge, by its very nature, not only lacks but explicitly renounces.

The ability to summarise is essential to scientific communication, but it becomes problematic when results are simplified to the point of being presented as conclusive and definitive. In these cases, evidence ceases to be a partial description of what we know at a given moment and is implicitly recast as an established truth, no longer subject to revision.

When new data emerge that modify or completely overturn previous interpretations, this has a considerable impact on public perception of science. What the scientific community regards as a natural progression of knowledge can appear to the general public as a contradiction or a retraction. The revision of existing knowledge is thus read not as a confirmation of the soundness of the scientific method, but as a sign of uncertainty or inconsistency.

Trust erodes not because science “changes its mind”, but because it had been communicated as though it never could. The implicit promise of certainty — even when it arises from well-intentioned synthesis — exposes scientific communication to the risk of generating disillusionment at precisely the moment when knowledge evolves.

To speak of evidence-based in a rigorous way therefore means taking on an additional responsibility: making clear what evidence is available, under what conditions it was produced, and what limitations circumscribe its interpretation. Not in order to weaken the scientific message, but — paradoxically and on the contrary — to make it more robust.

Science cannot advance if uncertainty is eliminated. Communicating it without extreme simplification means accepting that there are no definitive reassuring answers, and that knowledge progresses through successive adjustments. Acknowledging these limitations does not weaken science, nor does it diminish its reliability. On the contrary, it is what makes progress possible. Scientific knowledge advances precisely because it is willing to revise itself, to self-correct, and to remain, by definition, incomplete.