Description:
Matthew Berman delivers a structured critique of Anthropic’s recent pricing moves, centering on the company’s quiet removal of Claude Code from the Pro subscription plan — a change that would have forced users who rely on agentic coding workflows to upgrade to the $100/month Max tier. Although Anthropic later reversed the change, Berman argues the episode is symptomatic of deeper dysfunction: opaque policy changes, contradictory communications, and a failure to clearly define what third-party tools like OpenClaw are permitted to do with subscriber tokens.
The video’s most analytical section covers what Berman describes as Anthropic’s self-reinforcing business flywheel: the company builds best-in-class coding models, sells access to enterprise clients, harvests coding data to train better models, and repeats the cycle. Berman contends that Anthropic CEO Dario Amodei made a calculated decision not to invest heavily in compute — explicitly predicting that OpenAI might go bankrupt and betting against the need for massive capital expenditure. That bet, Berman argues, is now causing visible strain: throttled quotas, restrictions on heavy agentic users (estimated at roughly 7% of the user base), and a general erosion of trust among power users.
Berman also flags a specific April 3, 2026 policy update — dropped at 4 p.m. on Good Friday — that restricted Claude subscription tokens from being used with third-party harnesses. The video is a useful snapshot of the sentiment among Anthropic’s most engaged users at a pivotal moment in the company’s growth.
📺 Source: Matthew Berman · Published April 23, 2026
🏷️ Format: Opinion Editorial
