All AI models, especially large language models (LLMs), are prone to hallucination: they sometimes produce wrong or fictitious responses that nevertheless appear plausible.