“Seven lies in four sentences”
Rumors that LLMs have solved the hallucination problem are greatly exaggerated.
Nine months later (an eternity in current AI), it's still common for LLMs to spout complete, utter bullshit.
LLMs are not fit for any sort of knowledge work.