Claude Code Is Getting Worse, So Say These Users
Over the past few weeks, one complaint has been showing up again and again on X: Claude Code no longer feels as reliable as it used to.
This is not just the usual background grumbling that follows any AI tool. More developers are saying the quality feels off: messier output, rougher UX, weaker trust, and faster-burning limits.

What makes this worth paying attention to is not one angry post. It is the growing pile of them.
As of early April 2026, public sentiment around Anthropic’s Claude Code has clearly turned more negative. Some long-time users now say it needs constant supervision. Others are comparing it unfavorably to alternatives like Codex or questioning whether the subscription still feels worth it.
That does not automatically mean Claude Code is broken across the board. Public timelines amplify bad experiences. But when the same complaints keep repeating, they stop looking like noise. If you have been following the broader wave of AI-assisted coding tools like Cursor, this kind of wobble in trust stands out quickly, because the whole promise of these tools depends on momentum, not just raw model output.
What Users Are Complaining About
The complaints cluster around four recurring themes.
1. Performance Feels Worse
The broadest complaint is simple: Claude Code no longer feels as dependable as it did a few weeks ago. Tasks that once worked in one pass now take multiple tries, and the baseline feels shakier than before. That matters even more in setups where developers expect the assistant to stay close to the project instead of drifting, whether they work in cloud tools or in local workflows such as running AI assistance through VS Code with LM Studio.
They’re trying so hard to kill OpenClaw that they made Claude Code unusable. I’m done. I give up.
— Theo (@theo) April 6, 2026
I really was becoming an anthropic shill and these last 2 weeks have been terrible. Claude Code going into the shitter and my $200 max plan powering Hermes is now useless. Cool guys.
— 0xPBIT (@0xPBIT) April 6, 2026
Claude code quality as degraded so bad since the leak! What the actual fuck is going on at Anthropic?
— johnsontrades_ (@johnsontrades_) April 5, 2026
Something has felt off with Claude these past few days… Code quality on complex agentic tasks has dropped noticeably… the floor has clearly dropped.
— stfvsl (@stfvsl) April 5, 2026
I’ve been an avid Claude Code user and totally agree, over the last few weeks it feels like it got slower and code quality has dropped.
— Hermes (@hermes_f) April 5, 2026
2. Code Quality Has Slipped
The more expensive complaint is not that Claude Code is occasionally wrong, but that it is wrong in ways that create cleanup work: overly complex output, poor judgment, and changes that are harder to trust.
Claude Code feels like a startup god repo that needs refactoring, urgently. So if you want to learn from it the code quality is not where I would focus on.
— PneumaticDill (@PneumaticDill) April 6, 2026
3. The User Experience Feels Rougher
Even when the raw output is acceptable, the experience itself sounds rougher: vague failures, low visibility, awkward regressions, and less confidence that the tool will behave consistently inside a real workflow. Once that happens, developers stop treating the assistant like a collaborator and start treating it like an unstable tool that needs supervision.
Is it just me, or does Claude Code just sucks today? I am really confused here!
— Mayukh Bagchi (@MayukhBagchi4) April 6, 2026
claude code falls back to Accept Edits rather than Bypass Restrictions… which makes the remote-control iOS experience miserable… Can you please consider fixing this regression?
— drivelinekyle (@drivelinekyle) April 6, 2026
4. Heavy Users Feel Burned
The subscription pain is its own category. People can live with usage caps if the tool delivers. They get much less forgiving when limits hit faster while the results feel worse.
Claude code is unusable, a single simple prompt spikes the session limit to 10% – 15%
— oragecatz (@oragecatz) April 6, 2026
I have 2 claude code max subs that both run out in a 5 hour window now. I dont even do too many advanced things with them. Just vanilla claude code.
— nikita_builds (@nikita_builds) April 6, 2026
What Might Be Going On?
From the outside, it is hard to know.
Maybe this is a temporary regression during rapid updates. Maybe there are context-window tradeoffs, model-routing changes, or compute optimizations that are hurting real-world coding quality. Maybe the product is being pushed into more complex use cases than it can reliably support right now. Maybe expectations rose faster than the product matured.
Could it also be a visibility effect? Sure. Once a few influential developers start complaining, others become more likely to share their own bad experiences.
But even that does not fully explain away the volume.
When many users independently describe the same pattern (slower runs, weaker code, rougher UX, more supervision), it usually points to something real enough to matter.
Why This Matters
Claude Code is not just another chatbot people use for fun. For many developers, it sits directly inside their workflow. When a tool like that slips, the cost is not just annoyance. It is lost time, broken flow, and less trust in every suggestion that follows.
That is why public sentiment matters here.
A coding assistant does not need to be perfect to be useful. But it does need to be dependable enough that developers feel faster with it than without it. Once that flips, even temporarily, people start reaching for alternatives.
And that seems to be exactly what some are doing.
The Bigger Question
The more interesting question is not whether Claude Code is having a bad week.
It is whether this is a short-term wobble during a period of rapid iteration, or a sign that the product is struggling under the weight of rising expectations and real-world usage.
Anthropic may fix it quickly. The complaints may cool down just as fast as they spiked.
But right now, at least on X, the frustration looks real, broad, and increasingly difficult to ignore.