How user prompt style can trigger AI hallucinations

A new study indicates that users themselves often trigger AI “hallucinations.” Published on October 3 on arXiv.org, the paper “Mind the Gap: Linguistic Divergence and Adaptation Strategies in Human-LLM Assistant vs. Human-Human Interactions” finds that the way a prompt is phrased directly influences whether fabricated facts, quotations, or sources appear in AI responses.

The researchers analyzed more than 13,000 human-to-human dialogues and over 1,300 human-to-chatbot interactions. They observed that when people address an AI system, they write differently: messages become shorter, less grammatical, more brusque, and drawn from a narrower vocabulary. While the substance may remain the same, the style changes markedly, a pattern the authors describe as a clear style shift.
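
The paper’s exact measurements aren’t reproduced here, but a minimal sketch of the kind of comparison involved might look like the following, using two illustrative metrics (average message length and the rate of politeness markers). The sample messages and metric choices are assumptions for demonstration, not the authors’ pipeline.

```python
# Illustrative sketch: comparing simple style metrics between
# human-human and human-chatbot messages. Metrics and sample data
# are assumptions for demonstration, not the paper's methodology.
import re
from statistics import mean

POLITE = {"please", "thanks", "thank", "could", "would", "kindly"}

def style_metrics(messages: list[str]) -> dict[str, float]:
    """Compute rough style measures over a set of messages."""
    token_lists = [re.findall(r"[a-z']+", m.lower()) for m in messages]
    all_tokens = [t for toks in token_lists for t in toks]
    return {
        # Average message length in tokens (brevity proxy).
        "avg_len": mean(len(toks) for toks in token_lists),
        # Share of tokens that are politeness markers.
        "polite_rate": sum(t in POLITE for t in all_tokens) / len(all_tokens),
    }

# Hypothetical examples of the two registers.
human_human = [
    "Could you walk me through how the backup job is scheduled, please?",
    "Thanks! And do you know whether it also covers the staging database?",
]
human_bot = [
    "backup job schedule how",
    "staging db covered too?",
]

for name, msgs in [("human-human", human_human), ("human-bot", human_bot)]:
    m = style_metrics(msgs)
    print(f"{name}: avg_len={m['avg_len']:.1f}, polite_rate={m['polite_rate']:.2f}")
```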

This mismatch becomes a problem because large language models are trained largely on polite, well-formed text. Blunt or careless wording can therefore come across as ambiguous and nudge the system toward making things up. It’s a pattern many will recognize from everyday exchanges with bots: small shifts in tone or clarity can tilt the outcome.

Possible solutions

The team explored several remedies. One was training models on a broader range of speech styles, which raised the accuracy of recognizing user intent by 3%. Another was automatic paraphrasing of prompts, but that approach reduced response quality because emotional and contextual nuances were lost.
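
The paper doesn’t publish an implementation, but the paraphrasing idea amounts to a two-step pipeline: rewrite the raw prompt into fuller, politer phrasing, then answer the rewritten version. A minimal sketch, assuming a generic call_llm helper (hypothetical; any chat-completion API could stand in), might look like this. It also hints at why nuance gets lost: the answering step only ever sees the rewrite, never the original wording.

```python
# Minimal sketch of automatic prompt paraphrasing before answering.
# call_llm is a hypothetical stand-in for any chat-completion API;
# the rewrite instruction below is an assumption, not the paper's.

REWRITE_INSTRUCTION = (
    "Rewrite the user's message as a complete, grammatical, polite "
    "request. Preserve the meaning; do not answer it."
)

def call_llm(system: str, user: str) -> str:
    """Hypothetical LLM call; replace with a real client of your choice."""
    raise NotImplementedError("wire up your preferred chat API here")

def answer_with_paraphrase(raw_prompt: str) -> str:
    # Step 1: normalize the user's style into fuller, politer phrasing.
    rewritten = call_llm(REWRITE_INSTRUCTION, raw_prompt)
    # Step 2: answer the rewritten prompt, not the original. Tone and
    # emotional context present only in the raw wording is dropped here,
    # which is one plausible reason the study saw quality degrade.
    return call_llm("You are a helpful assistant.", rewritten)
```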

Key takeaway

The authors conclude that users can lower the risk of invented answers by writing prompts that are more complete, more grammatical, and more courteous, bringing AI chats closer to ordinary human conversation. It’s a modest habit worth adopting.