Research at our institution, from ideation and execution to analysis and reporting, is bound by the Netherlands Code of Conduct for Research Integrity. This code specifies five core values that organise and inform research conduct: Honesty, Scrupulousness, Transparency, Independence and Responsibility.
One way to summarise the guidelines in this document is to say that they are about taking these core values seriously. When it comes to using Generative AI in or for research, the question is whether and how this can be done honestly, scrupulously, transparently, independently, and responsibly.
A key ethical challenge is that most current Generative AI undermines these values by design [3–5; details below]. Input data is legally questionable; output reproduces biases and erases authorship; fine-tuning involves exploitation; access is gated; versioning is opaque; and use taxes the environment.
While most of these issues apply across societal spheres, there is something especially pernicious about text generators in academia, where writing is not merely an output format but a means of thinking, arguing, crediting, and structuring ideas. Hollowing out these skills carries foundational risks.
A common argument for Generative AI is its promise of higher productivity [5]. Yet productivity does not equal insight, and, left unchecked, it may hinder innovation and creativity [6, 7]. We do not need more papers, faster; rather, we need more thoughtful, deep work, also known as slow science.