Discussion
@egonw No, because deals are evaluated and negotiated at deal level, i.e. for 100s or 1000s of journals in one go, and on multiple criteria. On aspects where all journals are the same (e.g. all have peer review) you can draw conclusions at journal level, but where they differ (e.g. in the use of expert statistical reviewing) clearly not. And while the overall score on multiple criteria can be satisfactory, specific journals within the deal might still score badly on the majority of criteria. 1/2
@egonw 2/2 A crucial question is which, if any, of the evaluation criteria require satisfactory scores for all individual journals before the deal can go ahead. That will likely be limited to formal and often binary criteria (OA, license, peer review), not to criteria at the ordinal/ratio level (diversity of the editorial board, rejections, retractions, visibility and citedness, review duration, etc.) or qualitative ones (constructiveness of reviews, etc.).
@jeroenbosman I wonder if the social sciences have ever studied how Dutch researchers rank journals in their own field, with the main goal of developing a quality-based ranking. But I can also imagine closely related research into what Dutch scientists say actually defines quality for them. There were plenty of anecdotes a few years ago, but a systematic study would inform the UKB in their negotiations with the expensive publishers.