Claude Opus 4.7 Just Dropped… Or Did It Really?


Description:

Nate Herk investigates the release of Claude Opus 4.7 by first reconstructing what went wrong with Opus 4.6 over the preceding weeks. The central piece of evidence is an analysis of nearly 7,000 Claude Code sessions by a senior AMD director, which found that thinking depth collapsed 73% (from roughly 2,200 characters of reasoning per turn to about 600) and that the rate at which the model skipped reading files before editing them jumped from 6% to 33.7%. Users reported needing to intervene 12 times more often, and complaints about hallucinated git commit hashes, fabricated package names, and premature task abandonment became common. Subscribers on the $200/month Max plan reported burning through their monthly quota in under an hour.

Herk traces the root cause to a silent February 9th configuration change: Anthropic switched Opus 4.6 to adaptive thinking (dynamically allocating zero reasoning tokens for tasks it judged simple) and dropped the default effort level to medium, without touching the underlying model weights. Claude Code creator Boris Churney confirmed the pattern: the turns where the model hallucinated had zero reasoning tokens, while deep-reasoning turns produced correct output.

Opus 4.7 addresses these issues with restored reasoning depth, a new “X-high” effort tier exclusive to the new model, a /ultra-review slash command for dedicated code review sessions, and explicit safety improvements targeting deception and sycophancy. Benchmarks show significant improvements across major coding and reasoning categories. Herk flags two migration considerations: an updated tokenizer that may increase input costs 1.0–1.3x depending on content type, and higher default effort levels that will consume more thinking tokens per session.
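For teams weighing the migration, the tokenizer change can be budgeted with simple arithmetic. The sketch below applies the 1.0–1.3x token-count multiplier Herk cites; the token volume and per-million-token price are hypothetical placeholders, not published Anthropic figures.

```python
def estimate_input_cost(tokens_old: int, price_per_mtok: float, multiplier: float) -> float:
    """Estimate monthly input cost after a tokenizer change.

    tokens_old:     input tokens per month under the old tokenizer
    price_per_mtok: price per million input tokens (hypothetical here)
    multiplier:     expected token-count inflation; the video cites
                    1.0-1.3x depending on content type
    """
    return tokens_old * multiplier * price_per_mtok / 1_000_000


# Example: 50M input tokens/month at a hypothetical $15 per million tokens.
baseline = estimate_input_cost(50_000_000, 15.0, 1.0)  # no inflation
worst_case = estimate_input_cost(50_000_000, 15.0, 1.3)  # 1.3x inflation
print(f"baseline ${baseline:.2f}, worst case ${worst_case:.2f}")
```

Running both bounds gives a bracket on the budget impact before committing to the new model; the higher default effort levels would add thinking-token costs on top of this.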


📺 Source: Nate Herk | AI Automation · Published April 16, 2026
🏷️ Format: News Analysis
