venturebeat.com, 4 days ago

Anthropic Addresses Claude's Performance Issues

Anthropic reveals the reasons behind Claude's recent performance decline, which sparked widespread concern among AI users. Here is how these changes affected the model's capabilities and what the company is doing to rectify the situation.


Understanding Claude's Decline

In recent weeks, developers and AI enthusiasts have voiced concerns over Anthropic's Claude model, claiming it has suffered from a phenomenon dubbed "AI shrinkflation." Users reported a decline in reasoning capabilities, increased hallucinations, and inefficient token usage, eroding trust in the model's reliability.

Anthropic has now published a technical post-mortem, identifying three key changes that contributed to these issues. They clarified that while the model's core weights remained intact, adjustments to the surrounding harness inadvertently affected performance. Key findings include:

  • Default Reasoning Effort Change: The reasoning effort was lowered from high to medium to address UI latency, impacting Claude's depth of reasoning.
  • Verbosity Prompt Adjustments: Changes made to how verbose responses were generated also contributed to the perceived decline.
  • Caching Bug Fixes: A caching bug introduced in version v2.1.116 was identified and addressed, which had further degraded performance.
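The first finding illustrates a general lesson for developers building on model APIs: behavior can shift when a provider changes a default setting, even if the model weights are untouched. A defensive pattern is to pin such settings explicitly in every request rather than rely on defaults. The sketch below shows this idea in Python; the parameter names (`reasoning_effort`, `verbosity`) are hypothetical stand-ins for the harness settings described above, not Anthropic's actual API fields.

```python
# Hypothetical sketch: pin harness settings explicitly so a silent change
# to a provider's defaults cannot alter request behavior.
# "reasoning_effort" and "verbosity" are illustrative names, not a real API.

def build_request(prompt: str,
                  reasoning_effort: str = "high",
                  verbosity: str = "default") -> dict:
    """Return a request payload with harness settings pinned explicitly."""
    allowed = {"low", "medium", "high"}
    if reasoning_effort not in allowed:
        raise ValueError(f"reasoning_effort must be one of {allowed}")
    return {
        "prompt": prompt,
        "reasoning_effort": reasoning_effort,  # pinned, not left to defaults
        "verbosity": verbosity,
    }

req = build_request("Summarize this report.")
print(req["reasoning_effort"])  # prints "high" regardless of provider defaults
```

Pinning settings this way also makes regressions easier to diagnose: if output quality drops while your payload is unchanged, the cause lies upstream, not in your configuration.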

With these insights, Anthropic aims to restore user confidence and improve Claude's functionality moving forward. The company emphasizes its commitment to maintaining high standards in AI performance.