![⚡ [AIE CODE Preview] Inside Google Labs: Building The Gemini Coding Agent — Jed Borovik, Jules](/cdn-cgi/image/width=1200,quality=75,format=auto,fit=cover/https://assets.flightcast.com/static/01K4D8FDXRRRRZG0EBGNDF3SD3.jpg)
Latent Space
Jed Borovik’s story begins with the release of Stable Diffusion, which he describes as his “first Gen‑AI moment” and the catalyst for moving from search freshness work to building AI‑powered coding tools. After nine years at Google, he joined Google Labs, an organization tasked with creating products that sit outside Google’s core offerings. Labs operates as a true AI product org, tightly coupled with DeepMind for model development while also leveraging Google’s massive internal data and infrastructure. This unique position lets the team take ideas from raw pixels through training to a finished developer experience.
The flagship of that effort is Jules, an autonomous coding agent built on the Gemini model family. Unlike typical plug‑in assistants, Jules provisions its own compute environment, allowing it to run for hours or days and perform complex, multi‑step refactorings without constant user supervision. The agent is exposed through a REST API, a dedicated CLI, and now integrates with the Gemini CLI, making it reachable from any developer workflow. Real‑world usage includes triggering Jules from GitHub Actions to automatically generate and merge pull requests, demonstrating how an ambient AI can become a seamless part of the software delivery pipeline.
Early versions of the system relied on heavy scaffolding and traditional RAG pipelines that stitched together embeddings, chunking, and external tools. As Gemini models improved, the team discovered that “less is more”: simpler prompts and fewer sub‑agents achieved higher reliability, and simpler retrieval methods such as semantically aware grep replaced brittle embedding‑and‑chunking pipelines. The transition from a preview in the Trusted Testers program to a production‑grade product was marked by strong reception at events such as the AIE Code Summit, where AI engineers gather to exchange ideas beyond academic conferences. The summit underscores the growing need for industry‑neutral venues that accelerate AI engineering collaboration.
Jed Borovik, Product Lead at Google Labs, joins Latent Space to unpack how Google is building the future of AI-powered software development with Jules. From his journey discovering GenAI through Stable Diffusion to leading one of the most ambitious coding agent projects in tech, Borovik shares behind-the-scenes insights into how Google Labs operates at the intersection of DeepMind's model development and product innovation.
We explore Jules' approach to autonomous coding agents and why they run on their own infrastructure, how Google simplified their agent scaffolding as models improved, and why embeddings-based RAG is giving way to attention-based search. Borovik reveals how developers are using Jules for hours or even days at a time, the challenges of managing context windows that push 2 million tokens, and why coding agents represent both the most important AI application and the clearest path to AGI.
This conversation reveals Google's positioning in the coding agent race, the evolution from internal tools to public products, and what founders, developers, and AI engineers should understand about building for a future where AI becomes the new brush for software engineering.
Chapters
00:00:00 Introduction and GitHub Universe Recap
00:00:57 New York Tech Scene and East Coast Hackathons
00:02:19 From Google Search to AI Coding: Jed's Journey
00:04:19 Google Labs Mission and DeepMind Collaboration
00:06:41 Jules: Autonomous Coding Agents Explained
00:09:39 The Evolution of Agent Scaffolding and Model Quality
00:11:30 RAG vs Attention: The Shift in Code Understanding
00:13:49 Jules' Journey from Preview to Production
00:15:05 AI Engineer Summit: Community Building and Networking
00:25:06 Context Management in Long-Running Agents
00:29:02 The Future of Software Engineering with AI
00:36:26 Beyond Vibe Coding: Spec Development and Verification
00:40:20 Multimodal Input and Computer Use for Coding Agents