

So the tl;dr is just what we already knew: LLMs predict the most likely word to come next and have no concept of "true" or "false" information.
Indeed, having such a concept would require understanding that information, and any AI that actually understood information wouldn't be an LLM, because LLMs are just fancy autocorrect.
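To make the "most likely next word" idea concrete, here is a toy sketch. It is nothing like a real transformer LLM — just a bigram frequency table over a made-up corpus (the corpus text and the `predict_next` helper are both hypothetical) — but it shows the same basic move: pick whatever word most often followed the current one, with zero regard for whether the result is true.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a real LLM trains on billions of words.
corpus = "the ring was made by sauron the ring must be destroyed".split()

# Count which word follows which word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most common follower of `word`."""
    followers = next_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> "ring": the most frequent follower, true or not
```

The model "knows" nothing about rings or Sauron; it only knows which strings tend to co-occur, which is the whole point of the argument above.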

I was referring to The Lord of the Rings.