Science has been under siege for a while now.
Throw the advent of artificial intelligence into the mix, along with mounting evidence that coordinated scientific fraud is accelerating at industrial scale, and a new paper in Proceedings of the National Academy of Sciences poses an increasingly relevant question: What makes research trustworthy?
It’s not an abstract question. A late 2025 PNAS analysis documented an insidious network of paper mills, brokers, and complicit editors collaborating to publish fraudulent work across multiple “scientific” journals. Researchers report that they pushed papers through in batches, and sometimes even resorted to “journal hopping” when oversight tightened.
Worse still, their data hints that suspected fraudulent publications are cropping up much faster than legitimate science, outpacing traditional safeguards such as retractions and journal de-indexing.
“If these trends are not stopped, science is going to be destroyed,” Northwestern University data scientist Luís A. Nunes Amaral told The New York Times. “Science relies on trusting what others did, so you do not have to repeat everything.”
A Matter of Trust
It’s under these clouds of doubt that Brian Nosek, co-founder and executive director of the Center for Open Science, and his colleagues at the National Academies of Sciences, Engineering, and Medicine argue that these debates focus on the wrong signals.
Instead of leaning on prestige or publication counts, the authors propose a practical framework for evaluating what makes research findings trustworthy.
The researchers start by identifying seven core components – whether research is:
- Accountable,
- Evaluable,
- Evaluated,
- Well-formulated,
- Bias-controlled,
- Error-reduced, and
- Well-calibrated (meaning the evidence supports the claims).
And “trustworthy” doesn’t necessarily mean “true.” Science advances through critique and corrections – all while swimming through uncertainty. Trustworthy findings are those that contribute to that process. And that holds even if subsequent research revises or upends those conclusions.
A Systemwide View of Trust
Instead of just looking at individual studies, the proposed framework adopts a systems perspective. Within it, trustworthiness runs through three interconnected levels: the research itself, the researchers conducting it, and the institutions that shape incentives and provide oversight.
Accountability starts with ethical safeguards, conflict-of-interest disclosures, and proper attribution of contributions. Transparent reporting of funding sources and affiliations allows readers to assess potential bias. Institutions reinforce accountability through training, oversight mechanisms, and promotion criteria that elevate rigor over novelty.
Evaluability follows. Research must remain open to inspection. Sharing data, methods, materials, and code enables others to properly assess reproducibility and reliability. Publishing a paper, the authors note, isn’t enough. Transparency must extend throughout the underlying process.
Then comes evaluation. Independent peer review, replication, and public scholarly debate help surface alternative explanations and catch errors.
The paper highlights innovations such as Registered Reports, which shift editorial focus toward methodological quality.
Designing for Accuracy
The framework also stresses methodological rigor. Controlling bias demands tools such as randomization, blinding, preregistration, and validated measurement instruments. These curb systematic distortions that can skew findings.
Reducing error focuses on precision and reliability. Adequate sample sizes, instrument calibration, and statistical power remain essential for separating signal from noise. Underpowered studies drive up the risk of false positives while eroding confidence in findings.
Finally, well-calibrated claims ensure that conclusions don’t outpace the evidence. Researchers should communicate uncertainty clearly and avoid overstating implications. Institutions can support this by rewarding methodological strength over sensational headlines.
Moving Beyond Prestige
The authors’ central critique is the scientific community’s reliance – maybe overreliance – on proxy indicators. We all treat publication in a peer-reviewed journal as a de facto endorsement – a seal of credibility that’s above reproach.
Yet the rigors of peer review can vary as much as the peers themselves. And journal reputation remains an imperfect substitute for methodological quality.
When we treat publication counts as little more than career currency, superficial markers of legitimacy can crowd out deeper markers of credibility. As such, the authors call for more direct, measurable signals – whether a study was preregistered, whether data are shared openly, whether claim limits are transparent – that reviewers can evaluate independently.
Developing such indicators won’t be easy. They demand investment, validation, and careful deployment. But the alternative – traveling further down a road that leaves trust to intuition, prestige, or ideological alignment – carries even greater risks.
In the end, the authors argue that trust in science can’t be handed out based on a brand name. Researchers must earn it through accountable processes, rigorous design, transparent scrutiny, and intellectual humility.