Generative AI is writing a bigger slice of the world’s codebase, but the productivity bump is not hitting everyone equally. New research finds the gains concentrate among experienced engineers, even as early-career developers adopt AI assistants more often.
What the data shows about AI-driven developer output
A study from the Complexity Science Hub analyzed software activity across six countries and estimates that AI-generated code climbed from about 5% in 2022 to nearly 30% by late 2024. The authors, led by Simone Daniotti, link that surge to a measurable uplift in output—roughly a 4% productivity increase on average.

The pattern is uneven. Less-experienced programmers reportedly use AI tools at a higher rate, around 37%, yet the productivity and exploration benefits show up almost entirely among senior developers. U.S. companies spend more than $600 billion a year on programming labor, the study notes, so even small efficiency gains translate into meaningful dollars.
Beyond volume, the study found AI users more likely to assemble novel combinations of libraries, suggesting the tools help developers branch into new technical terrain faster. That inventive push, however, is overwhelmingly realized by seasoned engineers.
Why veteran developers see bigger gains from AI tools
Experience changes how AI is used. Senior engineers tend to approach assistants as accelerators for tasks they already understand well—scaffolding test suites, generating boilerplate, or translating between languages—while retaining tight control over architecture, security, and edge cases.
Crucially, they are faster at spotting subtle defects and reasoning about trade-offs in AI-suggested code. That oversight converts raw token prediction into sound engineering decisions. The CSH team’s finding that experienced developers adopt unfamiliar libraries more successfully fits this pattern: domain knowledge provides the map, and AI supplies the speed.
External data echoes the dynamic. Controlled evaluations have shown developers complete specific tasks faster with code assistants, but the biggest payoffs arrive when the human user can specify clear requirements, critique output, and iterate—skills that correlate with seniority.
Where junior developers fall behind with AI use
Early-career developers often lean on AI to fill knowledge gaps, but that reliance can obscure misunderstandings. When suggestions compile yet embed faulty logic, the debugging burden increases, and the learning loop short-circuits. Without strong mental models, it’s harder to judge when the model is confidently wrong.

Leaders warn that structure matters as much as speed. Executives at Planview argue that layering AI onto disciplined planning and risk management unlocks value across portfolios, not just in isolated tasks. BairesDev’s survey of 1,000+ developers found 76% feel AI makes their work more fulfilling by shifting focus to innovation—provided routine work is automated with guardrails.
The right mindset helps, too. Founders and CTOs increasingly advise treating the model like a junior teammate: fast and useful, but in need of review. Juniors who adopt that stance themselves, prompting deliberately, verifying rigorously, and documenting decisions, tend to progress faster.
Implications for engineering teams and AI tooling strategies
For organizations, the lesson is not “do the same with fewer engineers,” but “ship more with the same team.” Leaders at DataRobot frame AI as a force multiplier that raises feature throughput. That shift demands process changes: standard prompts for common tasks, policy-compliant code generation, and automated checks for licensing, security, and test coverage.
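One of the automated checks mentioned above can be sketched concretely. This is a minimal, hypothetical example of a licensing gate: it flags Python source files that lack a required license header before merge. The header text, file layout, and function name are assumptions for illustration, not anything from the study or the companies cited.

```python
# Hypothetical pre-merge check: flag .py files missing a license header.
# REQUIRED_HEADER and the repository layout are invented for illustration.
from pathlib import Path

REQUIRED_HEADER = "# Copyright (c) Example Corp."

def files_missing_header(root: str) -> list[str]:
    """Return sorted paths of .py files whose first line is not the header."""
    missing = []
    for path in Path(root).rglob("*.py"):
        first_line = path.read_text(encoding="utf-8").splitlines()[:1]
        if first_line != [REQUIRED_HEADER]:
            missing.append(str(path))
    return sorted(missing)
```

In practice a check like this would run in CI and fail the build when the returned list is non-empty, alongside the security and test-coverage gates the text describes.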
AI can also reduce coordination drag. Planview leaders point to automated status rollups, risk surfacing, and dependency mapping across value streams—tasks that once consumed project managers’ time. Those gains free senior engineers to apply judgment where it matters most.
What effective use of AI in software teams looks like
High-performing teams define where AI sits in the workflow. Common patterns include:
- Using models to draft tests before implementation
- Generating data access layers from schemas
- Refactoring for readability with enforced style rules
- Producing first-pass documentation that engineers tighten
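The first pattern above, drafting tests before implementation, might look like the following sketch. The function name (`slugify`) and its contract are hypothetical, invented for illustration: the model drafts the tests, and the engineer reviews them before writing the code that satisfies them.

```python
# Hypothetical test-first workflow: AI-drafted tests precede the implementation.
# The slugify function and its expected behavior are assumptions for illustration.
import re

def test_slugify_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_punctuation():
    assert slugify("Ship it!") == "ship-it"

# Minimal implementation written after the tests, as the pattern prescribes.
def slugify(text: str) -> str:
    """Lowercase, drop punctuation, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)
```

The value of the pattern is the review step: the engineer vets the drafted tests for faulty assumptions before any implementation exists to anchor them.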
Context is king. Supplying the model with concise architectural notes, domain constraints, and representative examples yields far better output than generic prompts. Pair that with mandatory reviews, security scanning, and benchmarked metrics—lead time, defect rates, and test coverage—to confirm the uplift is real, not just perceived speed.
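The benchmarked metrics mentioned above can be computed from ordinary delivery records. Here is a minimal sketch, assuming a hypothetical list of change records with invented field names; the data and schema are illustrations, not from the study.

```python
# Hypothetical sketch: computing lead time and defect rate from delivery
# records. Field names ("opened", "deployed", "caused_defect") and the
# sample data are assumptions for illustration.
from datetime import datetime
from statistics import median

changes = [
    {"opened": "2024-05-01", "deployed": "2024-05-03", "caused_defect": False},
    {"opened": "2024-05-02", "deployed": "2024-05-07", "caused_defect": True},
    {"opened": "2024-05-04", "deployed": "2024-05-05", "caused_defect": False},
]

def lead_time_days(change: dict) -> int:
    """Days from when a change was opened to when it was deployed."""
    opened = datetime.fromisoformat(change["opened"])
    deployed = datetime.fromisoformat(change["deployed"])
    return (deployed - opened).days

median_lead_time = median(lead_time_days(c) for c in changes)
defect_rate = sum(c["caused_defect"] for c in changes) / len(changes)
```

Tracking these numbers before and after AI adoption is what distinguishes a real uplift from perceived speed.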
The picture that emerges is clear: generative AI changes both the pace and the shape of software work. It amplifies the judgment of experienced developers and can accelerate learning for juniors, but only inside a system that prizes verification, accountability, and continuous improvement. The productivity promise is there—unevenly distributed—and the differentiator is not the tool itself, but how expertly it’s used.
