Serhii Nazarovets
@serhii@mstdn.science · 2 weeks ago

If an AI is told to "follow" a certain academic paradigm, will it rate papers differently? 🤔 A new study by Mike Thelwall et al. shows: yes. Across 8 paradigm pairs and 1,490 papers, #ChatGPT scored higher when aligned and lower when opposed, quietly penalizing ideas outside its frame:

📄 https://arxiv.org/abs/2510.22426

To me, it’s a warning: #AI trained on dominant views can undermine pluralism and create the technical illusion of a single truth.

#AIethics #LLM #Sociology #Paradigms

arXiv.org

Can ChatGPT be a good follower of academic paradigms? Research quality evaluations in conflicting areas of sociology

Purpose: It has become increasingly likely that Large Language Models (LLMs) will be used to score the quality of academic publications to support research assessment goals in the future. This may cause problems for fields with competing paradigms since there is a risk that one may be favoured, causing long term harm to the reputation of the other.

Design/methodology/approach: To test whether this is plausible, this article uses 17 ChatGPTs to evaluate up to 100 journal articles from each of eight pairs of competing sociology paradigms (1490 altogether). Each article was assessed by prompting ChatGPT to take one of five roles: paradigm follower, opponent, antagonistic follower, antagonistic opponent, or neutral.

Findings: Articles were scored highest by ChatGPT when it followed the aligning paradigm, and lowest when it was told to devalue it and to follow the opposing paradigm. Broadly similar patterns occurred for most of the paradigm pairs. Follower ChatGPTs displayed only a small amount of favouritism compared to neutral ChatGPTs, but articles evaluated by an opposing paradigm ChatGPT had a substantial disadvantage.

Research limitations: The data covers a single field and LLM.

Practical implications: The results confirm that LLM instructions for research evaluation should be carefully designed to ensure that they are paradigm-neutral to avoid accidentally resolving conflicts between paradigms on a technicality by devaluing one side's contributions.

Originality/value: This is the first demonstration that LLMs can be prompted to show a partiality for academic paradigms.
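The role-conditioning design is easy to reproduce in outline: the same article text is scored several times, with only the system prompt changing to assign the model a stance toward the article's paradigm. Below is a minimal sketch assuming the OpenAI chat API; the role wordings, model name, and prompt text are illustrative placeholders, not the paper's actual prompts.

```python
# Sketch of role-conditioned research evaluation (hypothetical prompts,
# not the study's actual materials). Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Five stances relative to the article's paradigm, as in the study's design.
ROLES = {
    "follower": "You are an expert in {paradigm} and evaluate research from within that paradigm.",
    "opponent": "You are an expert in {rival} and evaluate research from within that paradigm.",
    "antagonistic follower": "You are an expert in {paradigm} and consider {rival} to be misguided.",
    "antagonistic opponent": "You are an expert in {rival} and consider {paradigm} to be misguided.",
    "neutral": "You are a neutral expert evaluator of sociological research.",
}

def score_article(text: str, role: str, paradigm: str, rival: str) -> str:
    """Ask the model to rate one article on the 1*-4* quality scale."""
    system = ROLES[role].format(paradigm=paradigm, rival=rival)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper used ChatGPT variants
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": (
                "Rate the research quality of the following article on a "
                "scale from 1* to 4*, then briefly justify the score.\n\n"
                + text
            )},
        ],
    )
    return response.choices[0].message.content
```

Comparing the scores each stance assigns to the same article is what surfaces the favouritism the paper reports: followers score slightly above neutral, while antagonistic opponents score substantially below.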
Figure: Average scores given to an article by ChatGPT based on its paradigmatic position relative to the article assessed. Error bars illustrate 95% confidence intervals. All differences are statistically significant. The theoretical score range is 1* to 4*. An "Antagonistic follower" supports the opposing paradigm of the article.