If a statistical test always produces the same result, and if its results are often (usually?) described fallaciously, should the test be abandoned?
If so, we may need to abandon p-curve analysis:
#AI may exacerbate bad academic habits:
Science summaries from large language models ( #LLMs) were nearly five times more likely than human-authored summaries to contain broad generalizations (95% CI [3.06, 7.70], p < 0.001).
And newer language models over-generalized more, not less!
Generalization bias in large language model summarization of scientific research
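For readers curious about the arithmetic behind a "nearly five times more likely" figure with a confidence interval like the one quoted above, here is a minimal Python sketch of how an odds ratio and its Wald-type 95% CI are typically computed from a 2x2 table. The counts and function name are purely hypothetical illustrations, not data or code from the paper.

```python
# Hedged sketch: odds ratio with a Wald 95% CI on the log scale, from a 2x2 table.
# The counts used below are hypothetical placeholders, not the paper's data.
import numpy as np
from scipy import stats

def odds_ratio_ci(a, b, c, d, conf=0.95):
    """2x2 table:
    a = LLM summaries with broad generalizations,   b = LLM summaries without,
    c = human summaries with broad generalizations, d = human summaries without."""
    or_hat = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)    # SE of log(OR)
    z = stats.norm.ppf(0.5 + conf / 2)                 # ~1.96 for 95%
    lo, hi = np.exp(np.log(or_hat) + np.array([-z, z]) * se_log)
    return or_hat, (lo, hi)

# Hypothetical counts giving roughly a fivefold difference
print(odds_ratio_ci(110, 90, 40, 160))
```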
New job offer at ZPID Trier:
Junior professorship (W1-equivalent) Psychological Metascience (with tenure-track to W2 LBesG-equivalent). https://leibniz-psychology.onlyfy.jobs/job/10kku5n7
Nice to see another job ad that expects "Adherence to the principles of open science".
arXiv will no longer accept review articles and position papers unless they have been accepted at a journal or conference and have completed successful peer review.
This is because arXiv is being overwhelmed by hundreds of AI-generated papers a month.
Yet another open submission process killed by LLMs.
@carnage4life
What a pain.
I fully agree with the objective of suppressing AI generated slop, but the mechanism of insisting on peer review seems entirely contrary to the point of arXiv being a *preprint* service.
Plus, there is value in the diversity of content in preprints, which gets reduced by standard formats and typical publication venues. Peer review isn't necessarily good at promoting ideas outside the bandwagon-du-jour.
New paper alert! #statistics #metascience "On the poor statistical properties of the P-curve meta-analytic procedure" in JASA. https://raw.githubusercontent.com/richarddmorey/MoreyDavisStober_pcurveASA/refs/heads/main/text/asa_article/Morey_Davis-Stober_2025_JASA_with_supplement.pdf
We show that the "P curve" meta-analysis tests have terrible statistical properties, in spite of being used for over a decade to tell "bad" science from "good". The initial tests should never have made it through peer review. They suffer from extreme sensitivity, arbitrary conclusions, inadmissibility, nonmonotonicity in the evidence, and inconsistency in estimation. We recommend they not be used, and that better vetting is needed for methods in metascience.
Journal link: https://www.tandfonline.com/doi/full/10.1080/01621459.2025.2544397
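For context on what the test under critique actually does, here is a minimal sketch of the p-curve right-skew ("evidential value") test in its Stouffer variant, as it is usually described in the p-curve literature. This is an illustrative implementation with a made-up function name, not Morey and Davis-Stober's code.

```python
# Minimal sketch of the p-curve right-skew test, Stouffer variant. Illustrative only.
import numpy as np
from scipy import stats

def pcurve_right_skew(p_values, alpha=0.05):
    p = np.asarray(p_values, dtype=float)
    p = p[p < alpha]                      # p-curve analyzes only significant results
    pp = p / alpha                        # "pp-values": uniform on (0, 1) if all effects are null
    z = stats.norm.ppf(pp)                # probit transform of each pp-value
    z_comb = z.sum() / np.sqrt(len(pp))   # Stouffer combination, ~N(0, 1) under the null
    p_skew = stats.norm.cdf(z_comb)       # small value -> right skew -> "evidential value"
    return z_comb, p_skew

# Example: significant p-values piled up near zero suggest right skew
print(pcurve_right_skew([0.001, 0.004, 0.012, 0.030, 0.049]))
```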
I recently read @djnavarro 's 2021 paper "If Mathematical Psychology Did Not Exist We Might Need to Invent It: A Comment on Theory Building in Psychology" (https://doi.org/10.1177/1745691620974769).
It's a gem on the role and use of theory in cognitive psychology (and related fields, by extension) and the relation of theory to statistics. As expected, the footnotes are a joy. For my extra reading pleasure, I imagined the paper written in Danielle's sweary-blog style.
#theory #CognitivePsychology #CogPsych #MathematicalPsychology #MathPsych #MetaScience #paper
If Mathematical Psychology Did Not Exist We Might Need to Invent It: A Comment on Theory Building in Psychology
Following a wonderful workshop at FSCI 2025, the @force11 team is organizing a PREreview Club to review #metascience #preprints.
They are currently looking for participants to commit at various levels of engagement to help start up the club, especially people willing to help recruit others to fill roles for each review, and people who can commit to a role themselves, such as notetaker or article selection.
If that sounds like something you'd be interested in, read more here: https://force11.org/post/organizing-a-force11-prereview-club/