Over-reliance on English hinders cognitive science
https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(22)00236-4
#HackerNews #OverrelianceOnEnglish #CognitiveScience #LanguageBarrier #ResearchInsights
🧠 New paper by Aidan J. Horner (2025, Trends in Cognitive Sciences) introduces a 3D neural #StateSpace for #episodic memories. It replaces linear #SystemsConsolidation models with a dynamic framework where #hippocampal, #neocortical, and episodic specificity dimensions evolve independently and non-linearly, allowing memories to shift, reverse, or re-engage hippocampal circuits.
🌍 https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(25)00284-0
#Neuroscience #CognitiveScience #Hippocampus #CogSci #compneuro #memory
#Consciousness – The Brain’s #UserIllusion, Explained!🧠✨
What if consciousness isn’t the “captain” of our #thoughts — but just the dashboard we use to navigate them?
📽 Interview: https://youtu.be/M2qiVz95ZYk
📎 Information: https://philosophies.de/index.php/2023/12/25/naturalistic-view/
#Zoomposium #DanielDennett #Consciousness #UserIllusion #PhilosophyOfMind #CognitiveScience #Neuroscience #Qualia #FreeWill #ArtificialIntelligence #AGI #Embodiment #Functionalism #Materialism #Naturalism #MindAndMachine #Cognition #IllusionOfSelf
From Nao Tsuchiya @ Monash University
YouTube videos for the 2025 summer school for Qualia Structure & Integrated Information Theory
Dear all (who contributed to our summer school projects in the past!),
We are starting to upload the 2025 Qualia Structure & IIT summer school lecture videos to this YouTube channel:
https://www.youtube.com/watch?v=cwPGZ1CacVU&list=PLEP8weJRxEPbn_D22O-RmWIZbxOPDO-Vu
PLEASE spread the word to your local networks (email lists, social media, etc.).
It would be really great if we could reach STUDENTS (from high school to PhD), early-career researchers, and beyond. (Nowadays, it's very difficult to reach the relevant audience!)
We are particularly interested in reaching the younger generation, who might be interested in joining a future summer school (we will organize it again, possibly in September 2026 or February 2027).
For those who have not yet been exposed to basic, interdisciplinary consciousness research, the Day 1 talk by Christof Koch will be a great introduction.
For those who are interested in Integrated Information Theory, the gentle introduction by Matteo Grasso will be particularly informative.
#connectionist #consciousness #videos #CognitiveScience #CogSci #Summerschool
A lot of #MachineLearning and #PredictiveModelling in #statistics is based on minimisation of loss with respect to a training data set. This assumes that the training data set as a whole is representative of the potential testing sets. Consequently, loss minimisation is not an appropriate approach (or way of conceptualising the problem) in problems where the training data sets are not representative of the potential testing sets. (As a working title, let's call this issue "radical nonstationarity".)
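For concreteness, the "minimisation of loss with respect to a training data set" being questioned here is the standard empirical-risk objective (a textbook formulation, not something quoted from the post):

\[
\hat{f} \;=\; \arg\min_{f \in \mathcal{F}} \; \frac{1}{N} \sum_{i=1}^{N} \ell\bigl(f(x_i),\, y_i\bigr)
\]

where \(\{(x_i, y_i)\}_{i=1}^{N}\) is the fixed training set and \(\ell\) is the loss. The minimiser \(\hat{f}\) only approximates the best predictor at test time if the test observations are drawn from (roughly) the same distribution as the training observations, which is exactly the representativeness assumption above.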
I recently read Javed & Sutton 2024 "The Big World Hypothesis and its Ramifications for Artificial Intelligence" (https://web.archive.org/web/20250203053026/https://openreview.net/forum?id=Sv7DazuCn8) and think it describes a superset of this issue of radical nonstationarity. I strongly recommend this paper for motivating why loss minimisation with respect to a training data set might not always be appropriate.
Imagine an intelligent agent existing over time in a "big world" environment. Each observation records information about a single interaction of the agent with its environment, and it captures only the locally observable part of that environment. The agent may be moving between locations that are radically different with respect to the predictive relationships that hold, so the variables that are predictive of the outcome of interest may vary between observations. Nonetheless, there is some predictive information that an intelligent agent could exploit; the case where everything is totally random and unpredictable is of no interest when the focus of research is an intelligent agent. In such a world, minimising loss with respect to the history of all observations seen by the agent, or even a sliding window of recent history, seems irrelevant to the point of obtuseness.
One possible approach to this issue might be for the agent to determine, on a per-observation basis, the subset of past observations that are most relevant to making a prediction for the current observation. Then loss minimisation might play some role in determining or using that subset. However, that use of a dynamically determined training set is not the same thing as loss minimisation with respect to a statically given training set.
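To make the contrast concrete, here is a minimal, purely hypothetical Python sketch (my own toy, not from the post or the literature): a single global least-squares fit to the full history versus a per-query fit to a dynamically selected "most relevant" subset. The relevance rule here (k nearest neighbours in feature space) is just one illustrative stand-in; the post does not commit to any particular rule.

```python
import numpy as np

def global_fit_predict(X_train, y_train, x_query):
    # Fit once to the whole static training history (ordinary least squares),
    # implicitly assuming that history is representative of the query.
    w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return x_query @ w

def local_fit_predict(X_train, y_train, x_query, k=20):
    # Dynamically select the k past observations most similar to the current
    # query, then minimise loss only over that subset.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dists)[:k]
    w, *_ = np.linalg.lstsq(X_train[idx], y_train[idx], rcond=None)
    return x_query @ w

# Toy "big world": which predictor matters flips between regimes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.where(X[:, 0] > 0, X[:, 1], -X[:, 1])
print(global_fit_predict(X, y, X[0]), local_fit_predict(X, y, X[0]), y[0])
```

In this toy the global fit averages over incompatible regimes, while the locally fitted predictor uses only observations from (roughly) the same regime as the query; the point is only to illustrate the distinction drawn above between a statically given and a dynamically determined training set.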
I am trying to find pointers to scholarly literature that discusses this issue (i.e. situations where minimisation of loss with respect to some "fixed" training set is not an appropriate way to frame the problem). My problem is that I am struggling to come up with search terms to find that literature. So:
* Please suggest search terms that might help me find this literature
* Please provide pointers to relevant papers
#PhilosophyOfStatistics #PhilosophyOfMachineLearning #CognitiveRobotics #MathematicalPsychology #MathPsych #CognitiveScience #CogSci #CognitiveNeuroscience #nonstationarity #LossMinimisation
First was an incredible talk by Moira Dillon on interrogating human spatial and social cognition at the Kempner Institute at Harvard University. Using ingenious studies with infants, children, and adults across cultures, Dillon masterfully characterizes how we think about space and social agents. Highly recommend https://www.youtube.com/watch?v=YcgOj1wZVkg (2/6) #psychology #CognitiveScience
For everyone who cannot attend the CCN Conference this year in Amsterdam, all keynote lectures can be streamed here:
https://2025.ccneuro.org/keynote-lectures/
Full schedule with livestream links here:
https://2025.ccneuro.org/schedule-of-events/
First off, Nancy Kanwisher at 11.30 am (CET)
Edit: Not only keynotes but also symposia can be live streamed 🙂
#ccn2025 #neuroscience #cognitivescience #computationalneuroscience #CompNeuro
🧠💻 A team from the Mind, Brain and Behavior Research Center (CIMCYC, cimcyc.bsky.social) published a #programming guide aimed at students in #psychology and #cognitive #neuroscience. This evolving set of #tutorials offers a curated collection of conceptual reflections, practical examples, and methodological recommendations. The material is available in #Python, #RStats, and #MATLAB.
🌍 https://wobc.github.io/programming_book/
#CognitiveScience #OpenScience