Why AI Gives Different Answers to the Same Question
If you've used ChatGPT, Claude, or Gemini for anything important, you've probably noticed something unsettling: the answers change. You ask the same question twice and get two different responses. You compare ChatGPT's answer with Claude's and they contradict each other. You revisit a topic a week later and the model seems to have changed its mind entirely.
This isn't a bug. It's built into the fundamental architecture of how large language models work. Every response is generated through a probabilistic process — the model isn't retrieving a fixed answer from a database, it's predicting the most likely sequence of words given your input. And "most likely" can shift based on dozens of variables you never see.
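To make that concrete, here is a minimal sketch of temperature-based sampling, the mechanism most chat models use to pick the next word. The token list and probabilities are invented for illustration, not taken from any real model, but the mechanics are the same: the model produces a probability for each candidate token, and one is drawn at random, so the same prompt can yield different continuations.

```python
import random

# Hypothetical next-token probabilities for the prompt "The capital of France is"
# (illustrative numbers only, not from any real model).
next_token_probs = {"Paris": 0.90, "a": 0.05, "the": 0.03, "located": 0.02}

def sample_token(probs, temperature=1.0, rng=random):
    # Temperature rescales the distribution: values near 0 sharpen it
    # (almost deterministic), values above 1 flatten it (more varied).
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if cumulative >= r:
            return tok
    return tok  # fallback for floating-point edge cases

rng = random.Random(0)
# "Asking the same question" 10 times: each draw is independent,
# so the answers are not guaranteed to match.
samples = [sample_token(next_token_probs, temperature=1.0, rng=rng) for _ in range(10)]
print(samples)
```

At a very low temperature the sketch almost always returns "Paris"; at temperature 1.0 the less likely tokens occasionally win. Real systems add further variation on top of this, which is part of why even identical prompts diverge.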
A March 2026 study from Washington State University measured this directly. Researchers asked ChatGPT the same question 10 times and found its answers consistent only about 73% of the time. After adjusting for random chance, the AI performed only about 60% better than guessing, a result the researchers likened to a "low D" rather than reliable performance.
The core issue: AI doesn't "know" things. It generates things. Every response is a new creative act shaped by probability — not a lookup in a fact table. Understanding this distinction is essential to using AI effectively.
The 6 Causes of AI Inconsistency
AI inconsistency comes from six distinct sources. Some are intentional design features. Others are technical side effects. All of them affect the reliability of the answers you receive.