Same Data, Different Results
How the Choice of Analytical Method Changes Science
When the same data is analyzed differently by various researchers, the conclusions drawn can vary significantly. A large-scale study published in the prestigious scientific journal Nature now confirms this.
To the Point:
- Key finding: The same data lead to different results when different analytical methods are used. A study published in Nature shows that nearly 500 analysts drew different conclusions when analyzing 100 studies in the social and behavioral sciences.
- Analytical variability: The choice of analysis method has a significant impact on the findings. Decisions regarding data cleaning, variables, and statistical models are crucial and can lead to different outcomes.
- Scientific expertise: Varied outcomes do not reflect a lack of expertise. Even experienced researchers often arrive at divergent interpretations, which highlights the flexibility of data analysis.
The study found that when nearly 500 independent analysts reanalyzed the same data from 100 social and behavioral science studies, their conclusions frequently differed from those of the original researchers.
The study was led by an international research team headed by Balázs Aczél and Barnabás Szászi from Eötvös Loránd University and Corvinus University of Budapest. Matthias Burghart, a postdoc at the Max Planck Institute for the Study of Crime, Security and Law (Freiburg), co-authored the study. Researchers from the Max Planck Institute for Human Development (Berlin), the Max Planck Institute for Behavioral Economics (Bonn), and the Max Planck Institute for Empirical Aesthetics (Frankfurt) also contributed.
Analytical Flexibility Can Lead to Different Conclusions
Over the past decade, the social and behavioral sciences have introduced extensive reforms to make research more transparent, rigorous, and reliable. Pre-registration, registered reports, replication studies, and checks on analytical reproducibility all aim to reduce chance findings and biased results.
Yet one central question remained largely unexplored: How exactly does the way data is analyzed affect the results? In standard scientific practice, a single research team typically analyzes a dataset and publishes only one result—the one obtained using a specific analytical approach.
In fact, data analysis involves many small but crucial decision points: How is the data cleaned? What variables are included? Which statistical models are used? And how are the results interpreted?
Together, these decisions constitute what is known as analytical variability—the flexibility that arises from data analysis. Often, there is no single “correct” approach. Many of the decisions are well-founded from a technical standpoint and legitimate in and of themselves. But this flexibility can lead to very different conclusions being drawn from the same data.
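The decision points above can be made concrete with a toy simulation. The data, group sizes, and trimming rule below are invented for illustration (they are not the Nature study's materials); the point is only that two defensible pipelines applied to identical data produce different effect estimates.

```python
# Hypothetical illustration (not the Nature study's data): the same dataset,
# analyzed with two defensible pipelines, yields different effect estimates.
import random
import statistics

random.seed(42)

# Simulated outcome scores for a control and a treatment group,
# with a few extreme values mixed in.
control = [random.gauss(50, 10) for _ in range(100)] + [95, 98]
treatment = [random.gauss(53, 10) for _ in range(100)] + [5, 8]

def effect(a, b):
    """Raw mean difference: treatment minus control."""
    return statistics.mean(b) - statistics.mean(a)

def trim(xs, z=2.0):
    """Exclude values more than z standard deviations from the group mean."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [x for x in xs if abs(x - m) <= z * s]

# Pipeline A: analyze all observations as collected.
effect_all = effect(control, treatment)

# Pipeline B: remove outliers first -- an equally defensible choice.
effect_trimmed = effect(trim(control), trim(treatment))

print(f"effect, all data:     {effect_all:+.2f}")
print(f"effect, outliers cut: {effect_trimmed:+.2f}")
```

Neither pipeline is wrong; the gap between the two numbers is exactly the kind of analyst-dependent variation the study documents.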
Different Conclusions Do Not Reflect a Lack of Professional Competence
The researchers examined 504 independent reanalyses of data from 100 studies. A total of 457 analysts were given the same data and the same central research question, with five analysts responsible for each paper. However, they were free to choose how to analyze the data.
The results: Most reanalyses confirmed the main findings of the original studies, but effect sizes, statistical estimates, and degrees of uncertainty often differed significantly. In only about one-third of all cases did all analysts arrive at the same result as the original authors.
What was particularly striking was that even researchers with many years of statistical knowledge and experience frequently reached different conclusions. This suggests that analytical variability is not due to a lack of expertise, but rather to the inherent flexibility of data analysis.
Another factor: Observational studies proved to be less robust than experimental studies. More complex data structures allow for more analytical options—and thus also more uncertainty.
A New Perspective for Science
“These results do not call into question the credibility of earlier research,” says Balázs Aczél of Eötvös Loránd University. “Rather, they show that presenting a single analysis often does not reflect the true extent of empirical uncertainty. Ignoring analytical variability can lead to a false sense of confidence in scientific conclusions.”
Barnabás Szászi adds: “We therefore advocate for broader use of multi-analyst and multiverse approaches—especially for questions of high scientific or societal importance. Instead of searching for a single true answer, these approaches reveal how stable—or fragile—scientific conclusions actually are.”
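A multiverse analysis of the kind Szászi describes can be sketched in a few lines. The data and the grid of choices below are invented; the idea is simply to run every defensible combination of analysis decisions and report the spread of estimates rather than a single number.

```python
# Minimal "multiverse" sketch (illustrative only): cross a grid of analysis
# choices and report the full spread of resulting estimates.
import itertools
import random
import statistics

random.seed(7)
data = [random.gauss(1.0, 2.0) for _ in range(150)]  # invented sample

def estimate(xs, trim_z, use_median):
    """One 'universe': one combination of outlier rule and estimator."""
    if trim_z is not None:
        m, s = statistics.mean(xs), statistics.stdev(xs)
        xs = [x for x in xs if abs(x - m) <= trim_z * s]
    return statistics.median(xs) if use_median else statistics.mean(xs)

# Enumerate all defensible choices instead of picking one.
universes = {
    (trim_z, use_median): estimate(data, trim_z, use_median)
    for trim_z, use_median in itertools.product([None, 2.0, 3.0], [False, True])
}

for spec, est in sorted(universes.items(), key=lambda kv: kv[1]):
    print(spec, round(est, 3))

spread = max(universes.values()) - min(universes.values())
print(f"spread across universes: {spread:.3f}")
```

If the spread is small, the conclusion is stable across reasonable choices; if it is large, a single published estimate would understate the real uncertainty.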
A Call for Greater Transparency
The study, now published in Nature, shows that science should no longer rely on a single result, but rather on a range of possible interpretations. The goal is to disclose uncertainty and communicate transparently how robust or fragile a scientific result truly is.
In a world in which data is growing exponentially and research is becoming increasingly complex, this new perspective could be an important step toward more responsible and open science.
www.nature.com/articles/s41586-025-09844-9
Background
The study was conducted as part of the DARPA-funded SCORE program, which is dedicated to increasing the reliability and transparency of scientific research. The findings underscore the need to document and evaluate both the methodology and the analytical processes transparently.
