5/ In short, LLMs can learn representations that are structural and modular in both activation and weight space. But at the same time, they remain context-sensitive, so they capture ways in which human cognition deviates from purely symbolic architectures. In this way, they can move this long-standing debate forward by providing an example of a computational system that combines these properties.

I posted about Ellie Pavlick’s excellent talk on compositionality last week. I just saw that she is also giving this keynote, and anyone can watch it here:

I recommend it!

hva-uva.cloud.panopto.eu/Panop

#cogsci25 a great talk by Ellie Pavlick on ‘emergent compositionality in neural networks’:

Compositionality in language and thought has been one of the long-running debates in cognitive science. It refers to the way complex meanings are established from component parts. Specifically, it’s the idea that the meaning of a complex unit can be derived solely from the meanings of its parts: e.g., the meaning of “black cat” can be built up directly from the meanings of “black” and “cat”.
🧵

#cogsci25 I’m really pleased to see Doug Medin announced as the winner of the Cognitive Science Society’s 26th Rumelhart Prize

I started out doing research on concepts and categorization, and his work in that area is foundational, but he then also went on to do really important work on cross-cultural cognition.

https://cognitivesciencesociety.org/rumelhart-prize/

I very much enjoyed her talk. It was nice to see her finish by highlighting the value of the inter-disciplinarity that characterises cognitive science. It’s that coming together of different disciplinary perspectives that has been one of the main attractions of being a cognitive scientist for me… #cogsci25

2/

another great talk at #cogsci25 yesterday was by Sean Trott, "Do we know enough to know what language models know", about the difficulties of trying to make sense of LLMs.

One of the most useful things, I thought, was his point that we need to think more clearly about what it would mean if an LLM passes a human behavioural test, say a theory of mind test:
- do we bite the bullet and acknowledge the capacity?
- do we reject the capacity regardless (and if so, why)?
- do we change our views on the construct validity of the test, either for machines alone or for both machines and humans?

I think these are questions worth thinking about in advance, and doing so could yield a lot of conceptual and methodological clarification.

just heard a really interesting talk by Ruoxi Qi from the University of Hong Kong about bias in LLMs.

They investigated LLMs’ bias toward WEIRD values by prompting LLMs and comparing their answers to World Values Survey (WVS) data (Haerpfer et al., 2022). The WVS contains questions about human values, with data from large representative samples from different parts of the world.

As expected, they found bias toward WEIRD values, but also bias toward East Asia and Russia, presumably reflecting the balance of the training data. In fact, whether a country was rich or not was the best predictor of bias.

A really nice summary of their results from the paper is the Fig. 4 heatmap, overlaid with clustering results, which plots the distance between model distributions and WVS distributions as a measure of value alignment!

escholarship.org/content/qt87d