Description:
OpenRouter and Andreessen Horowitz (a16z) released a landmark empirical study analyzing over 100 trillion tokens of real-world LLM interactions, and The AI Daily Brief breaks down its most significant findings. OpenRouter’s infrastructure routes requests across 300 models to 5 million end users — with the study covering developers and application builders rather than end consumers — providing one of the largest documented windows into actual AI usage patterns available.
The most striking finding: programming grew from approximately 11% to over 50% of all token consumption over the course of 2025, cementing AI coding as the defining use case of the year. Reasoning model usage crossed 50% of tokens consumed, up from near zero when OpenAI’s o1 became widely available in late 2024. Average prompt length grew roughly 4x over the year, from around 1,500 to 6,000 tokens, reflecting increasingly complex, context-heavy tasks. Tool invocations in API requests grew from essentially zero to 15% over the same period, an early signal of agentic adoption.
Open-weight models — led by Chinese open-source releases including DeepSeek variants — grew from around 1% to as much as 30% of weekly usage during peak periods, though they plateaued in Q4 as Gemini 3, GPT-5.1, and Claude Opus 4.5 launched. Over 50% of open-source model usage involves roleplay or creative dialogue, reflecting use cases that closed-source providers restrict. The study also identifies what OpenRouter calls a “Cinderella glass slipper effect”: early adopters of new model releases form persistent cohorts that resist switching even as newer alternatives emerge.
📺 Source: The AI Daily Brief: Artificial Intelligence News · Published December 09, 2025
🏷️ Format: Deep Dive