Something that made me happy this week, shared with permission: a junior researcher in our faculty asked for advice on whether to use a GenAI service to cross-validate his hand-coded systematic review results.

I advised him to do a quick scan of the service against the core principles for research integrity in our faculty's GenAI guidelines: honesty, scrupulousness, transparency, independence & responsibility https://osf.io/preprints/osf/2c48n_v1

1/2
#GenAI #ethics

His assessment came back the next day.

He concluded the service (SciSpace, a ChatGPT application) was incompatible with at least three of the five core principles: it is not transparent about the underlying LLM, about how data is processed, or about its input and training data; it makes the researcher dependent on a black-boxed, for-profit service that seems designed for user lock-in; and it greatly complicates matters of responsibility and accountability.

2/3 (going to need one more)

Why this makes me happy: it shows that our guidance empowers researchers to make up their own minds and to make principled choices based on clear values. There is no need to prescribe or prohibit particular solutions; a values-first perspective takes the professionalism of researchers seriously and enables them to make informed choices.

Guidelines here: https://osf.io/preprints/osf/2c48n_v1

3/3

Research at our institution, from ideation and execution to analysis and reporting, is bound by the Netherlands Code of Conduct for Research Integrity. This code specifies five core values that organise and inform research conduct: Honesty, Scrupulousness, Transparency, Independence and Responsibility.

One way to summarise the guidelines in this document is to say they are about taking these core values seriously. When it comes to using Generative AI in or for research, the question is if and how this can be done honestly, scrupulously, transparently, independently, and responsibly.

A key ethical challenge is that most current Generative AI undermines these values by design [3–5; details below]. Input data is legally questionable; output reproduces biases and erases authorship; fine-tuning involves exploitation; access is gated; versioning is opaque; and use taxes the environment.

While most of these issues apply across societal spheres, there is something especially pernicious about text generators in academia, where writing is not merely an output format but a means of thinking, crediting, arguing, and structuring thoughts. Hollowing out these skills carries foundational risks.

A common argument for Generative AI is its promise of higher productivity [5]. Yet productivity does not equal insight, and when left unchecked it may hinder innovation and creativity [6, 7]. We do not need more papers, faster; we need more thoughtful, deep work, also known as slow science.