This video from a team lead at Google is fascinating.
Here's someone who has clearly seen the negative consequences of LLM adoption in software teams. He seems like a conscientious manager, talking about how it makes some people struggle, makes others resistant, and damages the social fabric of a team.
Yet he's stuck in a frame where "AI is the future of work" and "AI makes better decisions," so everything he has to say is in terms of how to drive adoption, even though, as he admits, it may have serious negative consequences.
The cognitive dissonance is striking, and familiar. He's working very hard to justify this policy of LLM use. He wants to "put his people first." The paradox comes from trying to hold both positions at the same time.
What if we just didn't force LLMs on people? What if they're not a net benefit? What if overcoming people's resistance means getting worse results? It feels like he won't let himself consider that possibility.