Evidence-Based: Evidence with Limits

Maria A. Principalli

10/11/2025 · 4 min read

The expression evidence-based has become so widely used, even in everyday language, that it is often treated as a synonym for reliability: something solid, unquestionable, and therefore unchanging over time. In a scientific context, however, the reality is quite different. Anyone who works with data knows that in science nothing is definitive or immutable.

So what does it actually mean to say that something is evidence-based?

The term evidence-based, literally “based on evidence,” refers to information derived from systematic research strategies. This evidence may take many forms: epidemiological data from observational studies, results from double-blind clinical trials, statistical models, or even computational simulations. All of these contribute to the advancement of knowledge, but they do not bring the same degree of confidence. Each comes with its own limitations, implicit assumptions, and margins of error. Referring to something as evidence-based without specifying how that evidence was produced often obscures, rather than clarifies, the actual state of knowledge.

The vast majority of studies from which this expression derives—whether observational, experimental, or descriptive—belong to the category of epidemiological studies. These analyze the distribution of events within a population of varying size over time, making it possible to identify trends, associations, and differences between groups. They provide valuable indications of what happens and how frequently it occurs. Their strength lies in large numbers and in their ability to capture real-world phenomena, but for this very reason they remain exposed to numerous confounding factors and other sources of bias. They can certainly suggest robust correlations, but they rarely allow direct causal relationships to be established with a good degree of confidence.

In observational studies—such as cohort, case-control, or cross-sectional studies—researchers collect information on events without exercising any control over them; the researcher does not intervene but merely observes. These studies make it possible to explore relationships between variables in real-world contexts. Their evidence is highly useful but intrinsically limited: the absence of experimental control makes it difficult to distinguish what is directly related to an event, such as the onset of a disease, from what is influenced by external, unmeasured factors. The primary aim of this type of study is to demonstrate association rather than causation. Individuals exposed to a risk factor “x” may differ from those not exposed in other ways that independently affect their disease risk. If such confounding factors are identified in advance, they can be accounted for both in the design and in the analysis of the study. However, the chances that some confounding factors go unnoticed are high.

For this reason, the evidence produced by observational studies is valuable, yes, but limited: it allows the identification of even strong associations, but it rarely permits the establishment of direct causal relationships with a high degree of confidence.
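As a rough illustration of how an unmeasured confounder can produce an association where no causal effect exists, here is a toy Python simulation. All probabilities are invented for the example: exposure has no effect at all on disease, yet a hidden factor makes both exposure and disease more likely, and the naive comparison between exposed and unexposed groups still shows an elevated risk ratio.

```python
import random

random.seed(0)

def simulate_observational(n=100_000):
    """Toy observational study: exposure has NO causal effect on disease,
    but an unmeasured confounder drives both (all numbers are invented)."""
    cases = {True: 0, False: 0}
    totals = {True: 0, False: 0}
    for _ in range(n):
        confounder = random.random() < 0.5          # hidden factor, never measured
        p_exposed = 0.80 if confounder else 0.20    # confounder makes exposure likelier
        p_disease = 0.30 if confounder else 0.05    # confounder also raises disease risk
        exposed = random.random() < p_exposed
        disease = random.random() < p_disease       # note: 'exposed' never enters p_disease
        totals[exposed] += 1
        cases[exposed] += disease
    risk_exposed = cases[True] / totals[True]
    risk_unexposed = cases[False] / totals[False]
    return risk_exposed / risk_unexposed            # naive risk ratio

print(f"naive risk ratio: {simulate_observational():.2f}")  # well above 1, despite no causal effect
```

With these invented probabilities the expected naive risk ratio is about 2.5: the exposed group is simply enriched in people who carry the hidden risk factor. A researcher who never measured that factor could easily mistake the association for causation.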

Experimental studies, by contrast, are generally less susceptible to bias because the researcher actively determines who is exposed to a given factor and who is not. When assignment is random and sample sizes are sufficiently large, even unrecognized confounding factors tend to be evenly distributed across groups, reducing the likelihood that they will systematically impact the results.
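The balancing effect of random assignment can be sketched with a toy Python simulation (all probabilities are invented). A hidden factor raises disease risk, and exposure again has no causal effect; but because exposure is assigned by coin flip rather than by anything related to the hidden factor, the factor spreads evenly across the two arms and the estimated risk ratio stays close to 1.

```python
import random

random.seed(0)

def simulate_randomized(n=100_000):
    """Toy randomized trial: exposure is assigned by coin flip, so an
    unmeasured risk factor cannot cluster in one arm (invented numbers)."""
    cases = {True: 0, False: 0}
    totals = {True: 0, False: 0}
    for _ in range(n):
        risk_factor = random.random() < 0.5         # unmeasured, hypothetical factor
        exposed = random.random() < 0.5             # random assignment, independent of everything
        p_disease = 0.30 if risk_factor else 0.05   # exposure has no causal effect
        disease = random.random() < p_disease
        totals[exposed] += 1
        cases[exposed] += disease
    return (cases[True] / totals[True]) / (cases[False] / totals[False])

print(f"risk ratio under randomization: {simulate_randomized():.2f}")  # close to 1.0
```

The same hidden factor that can distort an observational comparison is neutralized here not because it was measured, but because randomization makes it statistically independent of group membership, which is exactly the point made above about unrecognized confounders being evenly distributed.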

This greater degree of control, however, comes with limits that are not merely methodological but also ethical. In research involving human subjects, individuals cannot be deliberately exposed to potentially serious risks. This restricts the use of experimental approaches in investigating the causes of disease, for example. As a result, experimental studies are primarily applied to the evaluation of preventive measures or therapeutic interventions.

Randomized controlled clinical trials fall within this context and are often regarded as the most robust standard for assessing the effectiveness of interventions, particularly pharmaceuticals. Randomization, variable control, and—when feasible—double blinding reduce many sources of bias and allow cause-and-effect relationships to be inferred more reliably than in observational studies. Yet even this form of evidence is not absolute. Outcomes depend on study design, population characteristics and size, duration of follow-up, and experimental conditions. These conditions, by necessity, simplify biological systems that are actually highly complex.

It is precisely in the transition from the laboratory bench to communication that the term evidence-based reveals all of its ambiguity. In the attempt to make complex results more accessible, synthesis sometimes turns into extreme simplification and, in some cases, into an illusion of certainty. The expression “based on evidence” thus ends up suggesting a level of reliability that scientific knowledge, by its very nature, not only does not possess but actively rejects.

Synthesis is a necessary tool in scientific communication, but it becomes problematic when results are reduced to the extreme and presented as conclusive and definitive. In such cases, evidence ceases to be a partial description of what we know at a given moment and is implicitly interpreted as an established truth, no longer subject to revision.

When new data emerge that modify or completely overturn previous interpretations, this has a significant impact on the public perception of science. What represents a natural progression of knowledge for the scientific community may appear to the general public as a refutation or a contradiction. The revision of knowledge is thus read not as a confirmation of the robustness of the scientific method, but as a sign of uncertainty or inconsistency.

Trust erodes not because science “changes its mind,” but because it had been communicated as if it could not. The implicit promise of certainty—even when it arises from well-intentioned synthesis—exposes scientific communication to the risk of generating disillusionment precisely at the moment when knowledge evolves.

Speaking rigorously about evidence-based knowledge therefore entails an additional responsibility: clarifying what evidence is available, under what conditions it was produced, and what limits constrain its interpretation. Not to weaken the scientific message, but—paradoxically—to make it more robust.

Science cannot advance if uncertainty is eliminated. Communicating it without extreme simplifications means giving up reassuring definitive answers and accepting that knowledge proceeds through successive adjustments. Acknowledging these limits does not weaken science, nor does it reduce its reliability. On the contrary, it is precisely what makes progress possible. Scientific knowledge advances because it is willing to revise itself, to correct itself, and to remain, by definition, incomplete.

FURTHER READINGS:

D. Coggon, G. Rose, D. J. P. Barker, Epidemiology for the Uninitiated, fourth edition.