Next was a fantastic talk at the USC Information Sciences Institute by David Chiang on what transformers can and can't do. I admit I'm a sucker for talks like this, which relate neural networks to well-understood formal models to probe their theoretical limits. Here Chiang convincingly demonstrates that a formal logic model is equivalent to a class of transformers, then shows how they will never be able to perform certain essential tasks reliably. Highly recommend: https://www.youtube.com/watch?v=GVdLh-6wEJo (5/6)