🧠 New paper by Huang et al.: By using #pharmacological #fMRI and dynamic #connectome-based #PredictiveModeling, they show how #cortisol reshapes whole-brain #NetworkDynamics during emotional memory encoding. Trial-level analyses reveal distinct but increasingly integrated #arousal and #memory networks under #stress, supporting a hormonally driven "memory formation mode".
Are more reflective thinkers better at games that require empathy or perspective-taking?
In two experiments, reflective thinkers performed better on such games, seemingly because they paid more attention to the other players' incentives.
https://doi.org/10.1017/S1930297500007373
#econ #cogSci #psychology #relationships #negotiation #diplomacy #geopolitics #intelligence #defense #security #policy
🧠🏔️ I’m sharing presentations from the Society for Judgment and Decision Making conference in #Denver at the URL below:
https://bsky.app/profile/byrdnick.com/post/3m6agtctbhc2b
My poster is about #argumentMapping and #learningScience. You will also find presentations about how to advance #cogSci with #AI tools, do #ProcessTracing in #Qualtrics without #coding, and avoid backfiring in #healthcare #nudges.
Follow to fight FOMO and enjoy #openAccess conferencing.
🧠 New paper by Aidan J. Horner (2025, Trends in Cognitive Sciences) introduces a 3D neural #StateSpace for #episodic memories. It replaces linear #SystemsConsolidation models with a dynamic framework where #hippocampal, #neocortical, and episodic specificity dimensions evolve independently and non-linearly, allowing memories to shift, reverse, or re-engage hippocampal circuits.
🌍 https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(25)00284-0
#Neuroscience #CognitiveScience #Hippocampus #CogSci #compneuro #memory
Prayer's effects on health have been studied many times.
Small studies initially reported some benefit, but more rigorous studies found null or even harmful effects.
In sum, prayer seemed ineffective, and the reviewers recommended directing resources toward more promising interventions: https://doi.org/10.1002/14651858.CD000368.pub3
Notably, some of the investigators had "a prior belief in the positive effects of #prayer" (Benson and Byrd) and nonetheless found plenty of null results, or results that hovered on either side of the null (p. 16, 👆).
🔓 https://pmc.ncbi.nlm.nih.gov/articles/PMC7034220/pdf/CD000368.pdf
We are delighted to announce that the new, multidisciplinary Diamond #OpenAccess journal **Replication Research (R2)** is now accepting submissions for replications and reproductions in a broad range of disciplines including #linguistics, #CogSci, and #DigitalHumanities! 🎉
https://www.uni-muenster.de/Ejournals/index.php/replicationresearch/index
From Nao Tsuchiya @ Monash University
YouTube videos for the 2025 summer school for Qualia Structure & Integrated Information Theory
Dear all (especially those who contributed to our summer school projects in the past!),
We are starting to upload the 2025 Qualia Structure & IIT summer school lecture videos to this YouTube channel:
https://www.youtube.com/watch?v=cwPGZ1CacVU&list=PLEP8weJRxEPbn_D22O-RmWIZbxOPDO-Vu
PLEASE spread the word to your local networks (email lists, social media, etc.).
It would be really great if we could reach STUDENTS (from high school to PhD), early-career researchers, and beyond. (Nowadays, it's very difficult to reach the relevant audience!)
We are particularly interested in reaching the younger generation, who might be interested in joining a future summer school (we will organize it again, possibly in September 2026 or February 2027).
For those who have not yet encountered basic, interdisciplinary consciousness research, the Day 1 talk by Christof Koch will be a great introduction.
For those who are interested in Integrated Information Theory, the gentle introduction by Matteo Grasso will be particularly informative.
#connectionist #consciousness #videos #CognitiveScience #CogSci #Summerschool
A lot of #MachineLearning and #PredictiveModelling in #statistics is based on minimisation of loss with respect to a training data set. This assumes that the training data set as a whole is representative of the potential test sets. It follows that loss minimisation is not an appropriate approach (or way of conceptualising the problem) when the training data sets are not representative of the potential test sets. (As a working title, let's call this issue "radical nonstationarity".)
I recently read Javed & Sutton 2024 "The Big World Hypothesis and its Ramifications for Artificial Intelligence" (https://web.archive.org/web/20250203053026/https://openreview.net/forum?id=Sv7DazuCn8) and think it describes a superset of this issue of radical nonstationarity. I strongly recommend this paper for motivating why loss minimisation with respect to a training data set might not always be appropriate.
Imagine an intelligent agent existing over time in a "big world" environment. Each observation records information about a single interaction of the agent with its environment, and it records only the locally observable part of that environment. The agent may be moving between locations that are radically different with respect to the predictive relationships that hold, and the variables that are predictive of the outcome of interest may vary between observations. Nonetheless, there is some predictive information that an intelligent agent could exploit; the case where everything is totally random and unpredictable is of no interest when the focus of research is an intelligent agent. In such a world, minimising loss with respect to the history of all observations seen by the agent, or even a sliding window of recent history, seems irrelevant to the point of obtuseness.
One possible approach to this issue might be for the agent to determine, on a per-observation basis, the subset of past observations that are most relevant to making a prediction for the current observation. Then loss minimisation might play some role in determining or using that subset. However, that use of a dynamically determined training set is not the same thing as loss minimisation with respect to a statically given training set.
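To make the contrast concrete, here is a minimal sketch (my own illustration, not from Javed & Sutton or any established library) of the difference between minimising loss once over a fixed training set and re-selecting a relevant subset per observation. The regression setting and the nearest-neighbour relevance rule are illustrative assumptions only:

```python
import numpy as np

def fit_least_squares(X, y):
    # Minimise squared loss over whatever training set we are handed.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_static(X_hist, y_hist, x_new):
    # Standard framing: one model, fit once on the full static history.
    coef = fit_least_squares(X_hist, y_hist)
    return x_new @ coef

def predict_dynamic(X_hist, y_hist, x_new, k=20):
    # Per-observation framing: choose the k past observations most
    # relevant to x_new (here, simply the nearest in feature space),
    # then minimise loss only on that dynamically chosen subset.
    dists = np.linalg.norm(X_hist - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    coef = fit_least_squares(X_hist[nearest], y_hist[nearest])
    return x_new @ coef
```

Note that predict_dynamic still minimises a loss, but over a training set reconstructed for each query; that is a different object from the static training set presupposed by the standard framing.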
I am trying to find pointers to scholarly literature that discusses this issue (i.e. situations where minimisation of loss with respect to some "fixed" training set is not an appropriate framing). My problem is that I am struggling to come up with search terms to find it. So:
* Please suggest search terms that might help me find this literature
* Please provide pointers to relevant papers
#PhilosophyOfStatistics #PhilosophyOfMachineLearning #CognitiveRobotics #MathematicalPsychology #MathPsych #CognitiveScience #CogSci #CognitiveNeuroscience #nonstationarity #LossMinimisation
Encouraged by the reaction to yesterday's (very short) post: https://tomstafford.substack.com/p/ai-will-be-the-biro-of-thought
I have a set of similar pre-baked talking points on how to make sense of the AI/LLM revolution, so this may be the first in a series.
Sad news: “BODEN — Professor Margaret (Maggie) Boden, renowned cognitive scientist and long-time member of the University of Sussex, died peacefully in Brighton on 18th July 2025, aged 88.”
https://www.theargus.co.uk/memorials/death-notices/death/30683058.margaret-maggie-boden/notice/
3/ And here is a link to both their talks and some discussion - both well worth watching. Marcel Binz thread…
4/ And the Eric Schulz talk and thread.