Wondering if I've understood LLMs correctly. As I understand it, an LLM is a stochastic parrot: you can use it to generate text that sounds plausible.
But there isn't any sort of proofing algorithm behind it.
Can any GPT actually look for grammar mistakes? Spelling errors? Identify what is important, or the main message of a text? Tell whether any of the statements in a text are true?
Whether any of the cited sources exist?
Please point me to any papers showing that using an LLM for any of those tasks actually works.