Jellyfish AI Development Study: The Real Sting Has Yet to Land

The New Stack, Mar 19, 2026

Why It Matters

The findings show that AI tools deliver measurable productivity gains, making non‑adoption a strategic disadvantage while also exposing new quality‑control challenges.

Key Takeaways

  • 64% of firms generate most code with AI assistance
  • High adopters double pull‑request throughput versus low adopters
  • Fully autonomous code agents still rare but rising exponentially
  • AI adoption gap widens: top 10% up seven‑fold, bottom stagnant
  • Increased AI output strains code review and open‑source maintainers

Pulse Analysis

The Jellyfish AI Engineering Trends report, built on data from more than 700 companies, 200,000 engineers and 20 million pull requests, provides the most granular benchmark of AI‑assisted development to date. Over half of the surveyed organizations now rely on AI coding assistants daily, and 64% report that a majority of their code is written with AI help. Teams that have integrated these tools deeply are seeing their pull‑request throughput double compared with low‑adoption peers, confirming that the technology is moving from novelty to productivity engine.

While IDE‑based copilots such as GitHub Copilot and Cursor dominate current usage, the report flags a rapid rise in fully autonomous code agents that generate pull requests without human initiation. Although still a small fraction of overall activity, the growth curve is exponential, suggesting a future where AI not only accelerates coding but reshapes the entire software lifecycle. This shift brings new friction: AI‑generated changes often pass superficial checks yet hide subtle bugs, inflating review time and exposing security gaps. Open‑source maintainers are already reporting a surge in low‑quality submissions, underscoring the need for robust guardrails.

For enterprises, the data translates into a clear strategic imperative. Companies that lag in AI adoption risk a competitive disadvantage, while those that scale responsibly can leverage custom dashboards, such as those from Jellyfish's partnership with Augment Code, to correlate AI usage with delivery metrics and risk indicators. Investing in verification pipelines, LLM‑based detection of AI‑generated code, and continuous monitoring will turn raw productivity gains into sustainable advantage. As the industry converges on AI‑driven development, the firms that balance speed with rigorous quality controls will pull ahead of the pack.
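The adoption-versus-throughput comparison such dashboards surface can be sketched in a few lines. All figures and names below are hypothetical illustrations, not data or APIs from the Jellyfish report:

```python
# Illustrative sketch: comparing pull-request throughput between
# high- and low-AI-adoption teams. All numbers are made up.

def weekly_pr_throughput(merged_prs: int, weeks: int) -> float:
    """Average merged pull requests per week over a period."""
    return merged_prs / weeks

def throughput_ratio(high_rate: float, low_rate: float) -> float:
    """How many times more PRs high adopters ship than low adopters."""
    return high_rate / low_rate

# Hypothetical quarter (13 weeks) of merged-PR counts per cohort.
high_adopters = weekly_pr_throughput(merged_prs=260, weeks=13)  # 20.0 PRs/week
low_adopters = weekly_pr_throughput(merged_prs=130, weeks=13)   # 10.0 PRs/week

print(throughput_ratio(high_adopters, low_adopters))  # prints 2.0
```

A real dashboard would join per-engineer AI-assistant usage with source-control data before computing cohort ratios like this; the sketch only shows the final comparison step.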
