They keep swapping out the rabbit that runs ahead of the dogs on the AGI track. Now the doomers are worried not about a single model reaching AGI but about agents drawn from multiple models collectively becoming superintelligent.
Distributional AGI Safety
https://arxiv.org/pdf/2512.16856