A postmortem of three recent issues

moatmoat | 256 points

They must really be having a bad time if Anthropic of all labs is willing to share their infra details. On the actual precision bug, it is quite unfortunate on the FMA side; numerical issues are often deeply bewildering, and no AI can solve them (yet). Also goes to show: if you are in a super crunch situation like this one (a competitor literally eating your lunch every day), you need humans to understand what went wrong, and even then it can take weeks to rectify.
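
For anyone wondering what an FMA-type precision issue looks like in miniature: a fused multiply-add computes a*b + c with a single rounding, while a separate multiply-then-add rounds twice, so two kernels that differ only in fusion can disagree. A contrived sketch, nothing to do with Anthropic's actual kernels (math.fma needs Python 3.13+):

    import math

    a = 1.0 + 2**-52   # smallest double above 1.0
    b = 1.0 - 2**-52   # a nearby double just below 1.0
    c = -1.0

    # Exact product: a*b = 1 - 2**-104. FMA rounds once and keeps the
    # tiny residual; the separate ops round a*b to exactly 1.0 first.
    print(math.fma(a, b, c))   # -2**-104 (about -4.9e-32)
    print(a * b + c)           # 0.0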

am17an | 12 minutes ago

I’m pretty surprised that Anthropic can directly impact the infra for AWS Bedrock as this article suggests. That goes against AWS's commitments. I'm sure the same is true for Google Vertex, but I haven't dug into it from a compliance perspective before.

> Our own privacy practices also created challenges in investigating reports. Our internal privacy and security controls limit how and when engineers can access user interactions with Claude, in particular when those interactions are not reported to us as feedback.

Ok, makes sense and glad to hear it.

> It remains particularly helpful for users to continue to send us their feedback directly. You can use the /bug command in Claude Code

Ok, makes sense. I'd expect that a human can then see the context in that case, although I hope that is still made very explicit to the end user (I'm not a Claude Code user, so I can't comment).

> or you can use the "thumbs down" button in the Claude apps to do so

This is pretty concerning. I can’t imagine the average person equates hitting this button with forfeiting their privacy.

HoyaSaxa | 8 hours ago

With all due respect to the Anthropic team, I think the Claude status page[1] warrants an internal code red for quality. There were 50 incidents in July, 40 incidents in August, and 21 so far in September. I have worked in places where we started approaching half those numbers, and it always resulted in a hard pivot to focusing on uptime and quality.

Despite this, I'm still a paying customer because Claude is a fantastic product and I get a lot of value from it. After trying the API, it became a no-brainer to buy a 20x Max membership. The amount of stuff I've gotten done with Claude has been awesome.

The last several weeks have made me seriously question my subscription, though. I appreciate the openness of this post, but as a customer I'm not happy.

I don't trust that these issues have all been discovered and resolved yet, especially the load-balancing ones. At least anecdotally, I notice that around 12 ET (9 AM Pacific) my Claude Code sessions noticeably drop in quality. Again, I hope the team is able to keep finding and fixing these issues. Even running local models on my own machine at home, I run into complicated bugs all the time; I won't pretend these are easy problems. They are difficult to find and fix.

[1] https://status.anthropic.com/history

data-ottawa | 8 hours ago

> On August 29, a routine load balancing change unintentionally increased the number of short-context requests routed to the 1M context servers. At the worst impacted hour on August 31, 16% of Sonnet 4 requests were affected.

Interesting, this implies that the 1M context servers perform worse at low context. Perhaps this is due to some KV-cache compression, eviction, or sparse-attention scheme being applied on these 1M context servers?
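
The post doesn't say what the 1M-context servers do differently, so this is purely hypothetical, but one mechanism with exactly that signature is long-context RoPE position scaling, which changes how even short prompts are encoded. A sketch (the configs and scale factor are made up):

    import numpy as np

    def rope_inv_freq(dim, base=10000.0, scale=1.0):
        # Rotary-embedding inverse frequencies; scale > 1 is linear
        # position interpolation, stretching positions to cover a
        # longer context window.
        return (1.0 / base ** (np.arange(0, dim, 2) / dim)) / scale

    standard = rope_inv_freq(128)              # hypothetical short-context config
    stretched = rope_inv_freq(128, scale=8.0)  # hypothetical 1M-context config

    # The rotation angles for the token at position 10 differ between
    # the two configs, so a short prompt is literally encoded
    # differently on the long-context server.
    print((10 * standard)[:4])
    print((10 * stretched)[:4])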

cyanf | 8 hours ago

> On August 25, we deployed a misconfiguration to the Claude API TPU servers that caused an error during token generation. An issue caused by a runtime performance optimization occasionally assigned a high probability to tokens that should rarely be produced given the context, for example producing Thai or Chinese characters in response to English prompts, or producing obvious syntax errors in code. A small subset of users that asked a question in English might have seen "สวัสดี" in the middle of the response, for example.

Can anyone explain to a layperson how this sort of thing is even possible for an LLM?

For normal code, of course stupid bugs happen all the time. You accidentally introduce an off-by-one error in a conditional, for example, or add an extra `goto fail`.

But LLMs aren't written by humans! Models are trained by automated programs over a period of many months across unfathomably massive data centers.

How would a human introduce a bug like the one described in TFA?
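
Not an Anthropic engineer, but the short answer is that the weights aren't where such a bug lives: the serving stack around them (routing, precision casts, sampling kernels) is ordinary human-written code, and a bug there distorts the output distribution of a perfectly good model. A contrived sketch of a precision "optimization" going wrong (an illustration, not the actual bug):

    import numpy as np

    rng = np.random.default_rng(0)
    logits = rng.normal(loc=-5.0, size=50257).astype(np.float32)
    logits[42] = 15.0   # the token the model actually wants to emit

    def softmax(x):
        e = np.exp(x - x.max())   # max subtraction keeps exp() in range
        return e / e.sum()

    print(softmax(logits)[42])    # ~1.0: the right token dominates

    # Hypothetical optimization: run in float16 and skip the max
    # subtraction to save a pass over the logits. exp(15.0) overflows
    # float16 (max ~65504) to inf, and inf/inf is NaN, so the correct
    # token's probability is destroyed; depending on how the sampler
    # handles non-finite values, its mass can land on tokens that
    # should almost never appear.
    with np.errstate(over="ignore", invalid="ignore"):
        e = np.exp(logits.astype(np.float16))
        print((e / e.sum())[42])  # nan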

Wowfunhappy | 9 hours ago

> Incorrect routing affected less than 0.0004% of requests on Google Cloud's Vertex AI between August 27 and September 16.

Matches my experience. I use CC through our enterprise Vertex AI account and never noticed any degradation.

In general it seems like these bugs, while serious, were substantially less prevalent than anecdotal online reports would have you believe. We're really talking about a roughly 1-2 week window where most issues were concentrated, with a relatively small percentage of total requests and total users impacted.

extr | 9 hours ago

This is a very good start at sharing some level of information with their users, and kudos to the Anthropic team for doing that. However, I don't see any mention of the long-standing issue in CC of API timeout errors. And, at least for me, that's the most frustrating one.

dantodor | 7 hours ago

I do wonder what a random dip in quality does to a long-running conversation. Does the conversation recover at a later point, or does the introduction of temporary idiocy permanently affect the rest of it?

Statistically, it's probably the case that the dip occurred at a point that wasn't too important. But what happens if the idiot comes out at a critical point?

Kind of reminds me of the two alternate ways that time travel works in sci-fi. Does the small change to the past explode like a fission reaction, or does history heal itself?

Anywho, if errors do accumulate, I can see being very pissed off even by temporary idiocy from the model, as it poisons the context for the entire rest of the conversation.

stephen_cagle | 7 hours ago

Vibe coding gone wrong?

woah | 7 hours ago

Figuring out how to make their LLM serving deterministic might help them track this down. There was a recent post about how the received wisdom that kept attributing non-determinism to floating-point associativity actually overlooked the real reason for it [1].

[1] https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
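
A minimal numpy illustration of the batch-invariance point that post makes: the same row through the same matrix, computed at two different batch sizes, need not match bitwise, because the BLAS can pick different kernels and reduction orders for the two shapes. Whether the difference actually shows up depends on your BLAS build:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 4096)).astype(np.float32)
    W = rng.normal(size=(4096, 4096)).astype(np.float32)

    alone = x[:1] @ W      # the first row, as a "batch" of one
    batched = (x @ W)[:1]  # the same row, inside a batch of eight

    # Each run is deterministic on its own; the two shapes just aren't
    # guaranteed to agree, which is the batch-(non)invariance the post
    # identifies as the real culprit.
    print(np.array_equal(alone, batched))
    print(np.abs(alone - batched).max())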

vlovich123 | 8 hours ago

Big missing piece: what was the impact of the degraded quality?

Was it 1% worse and unnoticeable? Did it become useless? The engineering is interesting, but I'd like to see it tied to actual impact.

mulmboy | 5 hours ago

This reminds me of the story [1] about Facebook intentionally breaking parts of its Android app for some users (including crashing or disabling functionality), to see how far it could degrade before users stopped using Facebook.

According to reports, users did not stop coming back even when the app was broken for hours.

A similar thing happened to me when playing some early version of The Binding of Isaac on Linux, back when it was made with Flash. Its performance wasn't the best, but I couldn't stop playing.

So if people still return, maybe Anthropic has something great going on with Claude Code.

[1]: https://www.theguardian.com/technology/2016/jan/05/facebook-...

yomismoaqui | 6 hours ago

Seems like Claude is using TPUs a lot more than I thought. For some reason I thought 90%+ of their capacity was from AWS.

OGEnthusiast | 9 hours ago

Which LLM can generate that timeline event graphic from text?

Omnipresent | 6 hours ago

If you are going to run a non-deterministic system on three very different hardware platforms, doesn't it behoove you to tell your users where their experience is coming from?

Even just calling the platforms A, B, and C might give us the insight we're missing and let us spot incongruous behavior faster than trying to aggregate more generalized feedback.

zer00eyz | 8 hours ago

And yet no offer of credits to make things right for users, for what was essentially degraded performance of something you paid for.

I know I'll probably get pushback on this, but it left a sour taste in my mouth when I paid for a $200 sub that felt, at times, less useful than ChatGPT Plus ($20).

Or to summarize: [south park "we're sorry" gif]

flutas | 9 hours ago

I don't believe for one second that response quality dropped because of an infrastructural change and remained degraded, unnoticed, for weeks. This simply does not pass the sniff test.

mvdtnz | 6 hours ago

Title should be fixed: it’s about Claude models in general, not Claude Code

stellalo | 9 hours ago

Reminder that Anthropic is the only major AI lab that has never released any open-source/open-weight models.

behnamoh | 8 hours ago

TL;DR: Anthropic Postmortem of Three Recent Issues

In Aug–Sep 2025, Claude users saw degraded output quality due to infrastructure bugs, not intentional changes.

The Three Issues

1. *Context window routing error*: short-context requests were sometimes routed to long-context servers. Started small, worsened after load-balancing changes.

2. *Output corruption*: TPU misconfigurations led to weird outputs (wrong language, syntax errors). Runtime optimizations wrongly boosted improbable tokens.

3. *Approximate top-k miscompilation*: a compiler bug in the TPU/XLA stack corrupted token-probability selection, occasionally dropping the true top token (see the sketch after this summary).

Why It Was Hard to Detect

- Bugs were subtle, intermittent, and platform-dependent.

- Benchmarks missed these degradations.

- Privacy/safety rules limited access to real user data for debugging.

Fixes and Next Steps

- More sensitive, continuous evals on production.

- Better tools to debug user feedback safely.

- Stronger validation of routing, output correctness, and token selection.
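
On point 3: the post doesn't spell out the algorithm, but even a correctly implemented bucketed approximate top-k silently drops a legitimate candidate whenever two high-scoring tokens share a bucket, and a miscompiled one can do worse and lose the top token itself. A hypothetical sketch of the bucketed scheme:

    import numpy as np

    def approx_topk(logits, k, buckets=64):
        # Shard the vocab into contiguous buckets, keep each bucket's
        # max, then take an exact top-k over the bucket winners only.
        n = len(logits)
        pad = (-n) % buckets
        padded = np.concatenate([logits, np.full(pad, -np.inf)])
        shards = padded.reshape(buckets, -1)
        winners = shards.argmax(axis=1) + np.arange(buckets) * shards.shape[1]
        order = np.argsort(padded[winners])[::-1]
        return winners[order[:k]]

    rng = np.random.default_rng(0)
    logits = rng.normal(size=1024)
    logits[3] = 9.0
    logits[5] = 8.0   # lands in the same bucket as token 3

    print(np.sort(np.argsort(logits)[::-1][:2]))  # exact top-2: [3 5]
    print(np.sort(approx_topk(logits, 2)))        # token 5 is gone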

moatmoat | 10 hours ago

> We don't typically share this level of technical detail about our infrastructure, but the scope and complexity of these issues justified a more comprehensive explanation.

Layered in self-aggrandizement. You host a service; people give you money.

bravetraveler | 9 hours ago

Wow. Sneaky. As far as I can tell, they don't even state the rate of impact for the XLA bug, which affected everyone, not just Claude Code users. Very vague. Interesting.

Claude Code has made almost half a billion so far[1] (>$500M in ARR, and it's like 9 months old), and 30% of all users were impacted at least once just by the first routing bug. Scary stuff.

Their postmortem is basically "evaluations are hard, we relied on vibe checking, now we are going to do even more frequent vibe checking". I believe it was indeed unintentional, but in a future where investors' money won't come down from the skies, serving distilled models will be very tempting. And you cannot be held to any SLA currently; it's just vibes. I wonder how enterprise vendors are going to deal with this going forward, since quality can degrade without the client or vendor even being able to really prove it.

[1] https://www.anthropic.com/news/anthropic-raises-series-f-at-...

deepdarkforest | 9 hours ago