
OpenAI’s new reasoning AI models hallucinate more

April 18, 2025: New OpenAI Models Increase AI Hallucinations - OpenAI's latest reasoning models, o3 and o4-mini, hallucinate more than earlier versions, notably on OpenAI's own PersonQA benchmark for factual questions about people. Although the models excel at coding and math, both make unfounded claims more often than their predecessors, with o3 hallucinating at roughly twice the rate of earlier reasoning models. Experts suggest the reinforcement learning used to train these models may exacerbate the issue.

These inaccuracies affect real-world applications, for example when the models fabricate web links that do not exist. OpenAI is researching mitigations, including web search integration, to improve factual accuracy. The industry's shift toward reasoning models underscores how difficult hallucinations remain to manage.

