Description:
Itamar Friedman, CEO and co-founder of Qodo, presents a data-driven examination of AI code quality at AI Engineer, synthesizing findings from three industry reports—from Qodo, Sonar, and a third research organization—covering thousands of developers, millions of pull requests, and billions of lines of code. The backdrop is a recent wave of cloud outages at companies that have publicly embraced AI-generated code, raising the question of whether speed-focused AI adoption is creating new quality risks.
The numbers tell a pointed story: 60% of developers report that at least a quarter of their code is now AI-generated or AI-shaped, and 15% say the share exceeds 80%. But this surge in volume hasn't improved defect rates: bugs per line of code are unchanged, so total bugs scale with total output. PR volume is up 97%, review time is up 90%, and 42% of developers report spending significantly more time dealing with quality issues introduced by AI tools. Even Cursor and Copilot rules, which developers write to steer AI output toward their standards, are followed only inconsistently according to the survey results.
Friedman frames AI code quality as a four-stage maturity progression: basic autocomplete, agentic code generation, AI-powered quality workflows outside the IDE, and systematic governance. He argues that teams stuck at stage one or two hit a productivity ceiling, and that breaking through requires structured, automated quality pipelines rather than ad-hoc AI review, particularly for organizations operating at scale with financial, regulatory, or reliability obligations.
📺 Source: AI Engineer · Published December 11, 2025
🏷️ Format: Benchmark Test
