What would a language model trained on all the literature available in 1500 look like? No doubt it would spout a lot about God and the geocentric model.
Would we expect it to ever develop the heliocentric model, Newtonian mechanics, or Ricardian economics?
Exponential growth is self-similar at all scales: wherever you stand on the curve, the knowledge ahead of you dwarfs the knowledge behind you in the same proportion. So suggesting that LLMs can push the envelope of 21st-century knowledge and beyond is akin to claiming that our hypothesised 1500s GPT could have done the same in its own time.