5/ In short, LLMs can learn representations that are structural and modular in both activation and weight space. At the same time, they remain context-sensitive, so they also capture ways in which human cognition deviates from purely symbolic architectures. In this way, they can move this long-standing debate forward by providing an example of a computational system that combines both properties.
I posted about Ellie Pavlick’s excellent talk on compositionality in #LLMs at #cogsci25 last week. I just saw that she is also giving this keynote at #ccn2025, and anyone can watch it here:
I recommend it!
https://hva-uva.cloud.panopto.eu/Panopto/Pages/Embed.aspx?id=b26bd214-6afd-413e-898d-b2dc00787139