5/ In short, LLMs can learn representations that are structural and modular in both activation and weight space. Yet they remain context-sensitive, capturing ways in which human cognition deviates from purely symbolic architectures. They can thus move this long-standing debate forward by providing an example of a computational system that combines both properties.