Konrad Körding | Helping Human Scientists Do Better Science @ Vision Weekend Puerto Rico 2026
Why It Matters
By embedding rigorous self‑scrutiny into AI‑assisted workflows, the proposal could raise the overall quality of published research and mitigate the incentives that currently reward quantity over insight.
Key Takeaways
- Scientists often ask poorly defined, low‑impact research questions.
- AI tools tend to reinforce bad science without clear quality metrics.
- Introducing deliberate “friction” can help researchers spot logical fallacies.
- PlanYourScience.com offers a checklist‑driven interface to improve rigor.
- Publication pressure fuels volume over value, undermining scientific credibility.
Summary
At Vision Weekend Puerto Rico 2026, Konrad Körding argued that the biggest obstacle to better science is not a lack of data or computing power, but the way human researchers formulate and test questions.
He observed that many academics pose poorly defined, low‑impact questions and then rely on statistical practices that generate false positives, citing the XKCD “jelly‑bean” comic, in which testing twenty jelly‑bean colors without correction yields one spurious link to acne, as an illustration. AI‑driven tools such as Prism, Future House, and Sapio promise to automate discovery, yet Körding warns they inherit the same biases because they are trained on the published literature, which mixes groundbreaking and junk papers.
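The jelly‑bean point is the classic multiple‑comparisons problem, and it is easy to verify by simulation. The sketch below (my illustration, not from the talk) exploits the fact that under the null hypothesis a test's p‑value is uniform on [0, 1], so running 20 uncorrected tests at α = 0.05 makes a false positive far more likely than 5%:

```python
import random

random.seed(42)

ALPHA = 0.05
N_COLORS = 20           # one significance test per jelly-bean color
N_EXPERIMENTS = 10_000  # repeat the whole 20-color study many times

false_positive_runs = 0
for _ in range(N_EXPERIMENTS):
    # Under the null hypothesis (no color causes acne), each test's
    # p-value is uniformly distributed on [0, 1].
    p_values = [random.random() for _ in range(N_COLORS)]
    if any(p < ALPHA for p in p_values):
        false_positive_runs += 1

rate = false_positive_runs / N_EXPERIMENTS
print(f"Studies with >=1 'significant' color: {rate:.1%}")
```

Analytically the chance of at least one spurious hit is 1 − 0.95²⁰ ≈ 64%, which is what the simulation reproduces: most such studies will "discover" a link even when none exists.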
Körding’s solution is to add “friction” rather than remove it. His prototype, PlanYourScience.com, forces users to articulate the precise gap they aim to fill, then runs their plan through a checklist of common failure modes: over‑generalization, missing controls, and other fallacies he has catalogued from a century of bad science. He cites AlphaFold as a genuine success but dismisses hype around AI‑generated drug candidates that merely rediscover existing work.
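The talk does not describe how PlanYourScience.com is implemented, but the checklist idea can be sketched as a simple audit over a declared research plan. Everything below is hypothetical; the failure modes beyond the two named in the talk (over‑generalization, missing controls) are illustrative examples:

```python
# Hypothetical sketch only: the tool's internals are not described in the talk.
# The point is the mechanism: a plan must explicitly address each known
# failure mode before it passes the audit.
FAILURE_MODES = [
    "over-generalization beyond the studied population",
    "missing control condition",
    "multiple comparisons without correction",   # illustrative addition
    "outcome switching after seeing the data",   # illustrative addition
]

def audit_plan(plan: dict) -> list:
    """Return the failure modes the plan has not yet addressed."""
    addressed = set(plan.get("addressed_failure_modes", []))
    return [mode for mode in FAILURE_MODES if mode not in addressed]

plan = {
    "research_gap": "Does intervention X causally affect outcome Y?",
    "addressed_failure_modes": ["missing control condition"],
}
remaining = audit_plan(plan)
print(remaining)  # the "friction": unresolved items block the plan
```

The deliberate design choice here is that the checklist is exhaustive and the default answer is "not addressed", so the friction is opt‑out rather than opt‑in.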
If adopted, this friction‑based approach could curb the flood of low‑value publications, reshape peer‑review incentives, and create a more reliable feedback loop between researchers and AI. Körding invites the community to test the tool, suggesting that a systematic audit of scientific reasoning may be the missing piece for truly intelligent research assistants.