AI Pulse

⚡️ Ship AI Recap: Agents, Workflows, and Python — W/ Vercel CTO Malte Ubl

Latent Space • October 31, 2025

Key Takeaways

  • Vercel released a Workflow Development Kit for idiomatic workflows.
  • Vercel's AI SDK v6 adds a native agent abstraction.
  • Workflows enable infinite‑duration serverless functions without extra cost.
  • Vercel monetizes open‑source via support and ecosystem growth.
  • Focus on low‑level, flexible APIs over rigid abstractions.

Pulse Analysis

In the latest Ship AI recap, Vercel CTO Malte Ubl outlined how the company is positioning itself at the forefront of AI engineering. Central to the announcement is the new Workflow Development Kit, a framework‑first tool that makes building, testing, and deploying complex AI workflows feel native to developers. By treating workflows as first‑class citizens—allowing infinite‑duration serverless functions, automatic retries, and human‑in‑the‑loop webhooks—Vercel removes traditional orchestration friction and keeps operational costs flat, a crucial advantage for production‑grade AI services.
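The durable-orchestration idea described above can be sketched in plain TypeScript. This is a hypothetical illustration of checkpointed, retrying steps, not Vercel's actual Workflow Development Kit API (`runStep` and the in-memory `Store` are invented for the example):

```typescript
// Minimal sketch of durable steps: each completed step's result is
// checkpointed, so a rerun resumes from the checkpoint instead of
// repeating work, and failed steps are retried automatically.
type Store = Map<string, unknown>;

function runStep<T>(store: Store, key: string, fn: () => T, maxRetries = 3): T {
  if (store.has(key)) return store.get(key) as T; // resume from checkpoint
  let lastErr: unknown;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const result = fn();
      store.set(key, result); // checkpoint before continuing
      return result;
    } catch (err) {
      lastErr = err; // automatic retry on failure
    }
  }
  throw lastErr;
}

// A flaky step: fails twice, then succeeds — the retry loop absorbs it.
let calls = 0;
const store: Store = new Map();
const a = runStep(store, "fetch", () => {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return 21;
});
const b = runStep(store, "double", () => a * 2); // b === 42
const again = runStep(store, "fetch", () => 999); // checkpoint hit: still 21
```

In a real durable-execution system the store would be persisted, which is what lets a function "pause" indefinitely and resume without cost.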

The conversation also highlighted the evolution of Vercel's AI SDK, now at version 6 beta, which introduces a direct agent abstraction previously missing from the stack. This addition bridges the gap between chat‑style bots and autonomous agents, letting developers define tool‑calling loops, streaming responses, and fine‑grained control without abandoning the low‑level flexibility that Vercel champions. By keeping the API surface minimal yet extensible, Vercel ensures that emerging patterns—whether for streaming large language model outputs or integrating human approvals—can be adopted quickly without heavyweight rewrites.
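The tool‑calling loop at the heart of such an agent abstraction can be sketched as follows. This is a generic illustration, not the AI SDK's API; `callModel` is a stub standing in for a real LLM call, and all names are invented:

```typescript
// Sketch of an agent loop: on each turn the model either requests a
// tool call or returns a final answer; tool results are fed back into
// the history until the model is done or a step budget is exhausted.
type ModelStep =
  | { type: "tool_call"; tool: string; input: number }
  | { type: "final"; text: string };

type Tool = (input: number) => number;

function runAgent(
  callModel: (history: string[]) => ModelStep,
  tools: Record<string, Tool>,
  maxSteps = 5,
): string {
  const history: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = callModel(history);
    if (step.type === "final") return step.text;
    const result = tools[step.tool](step.input); // execute requested tool
    history.push(`${step.tool}(${step.input}) = ${result}`);
  }
  throw new Error("agent exceeded step budget");
}

// Stub "model": asks for one tool call, then answers with its result.
const answer = runAgent(
  (history) =>
    history.length === 0
      ? { type: "tool_call", tool: "square", input: 7 }
      : { type: "final", text: history[history.length - 1] },
  { square: (n) => n * n },
); // "square(7) = 49"
```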

Beyond technical features, Ubl emphasized Vercel’s open‑source business model: the company drives adoption through freely available libraries, community contributions, and a support‑centric revenue stream. This approach not only expands the ecosystem but also guarantees that the tools are dog‑fooded internally, aligning product development with real‑world developer needs. For enterprises evaluating AI workloads, Vercel offers auditability, reliability, and a developer experience that prioritizes simplicity and performance, making it a compelling platform for scaling next‑generation AI applications.

Episode Description

In this conversation with Malte Ubl, CTO of Vercel (http://x.com/cramforce), we explore how the company is pioneering the infrastructure for AI-powered development through their comprehensive suite of tools including workflows, AI SDK, and the newly announced agent ecosystem. Malte shares insights into Vercel's philosophy of "dogfooding" - never shipping abstractions they haven't battle-tested themselves - which led to extracting their AI SDK from v0 and building production agents that handle everything from anomaly detection to lead qualification.

The discussion dives deep into Vercel's new Workflow Development Kit, which brings durable execution patterns to serverless functions, allowing developers to write code that can pause, resume, and wait indefinitely without cost. Malte explains how this enables complex agent orchestration with human-in-the-loop approvals through simple webhook patterns, making it dramatically easier to build reliable AI applications.

We explore Vercel's strategic approach to AI agents, including their DevOps agent that automatically investigates production anomalies by querying observability data and analyzing logs - solving the recall-precision problem that plagues traditional alerting systems. Malte candidly discusses where agents excel today (meeting notes, UI changes, lead qualification) versus where they fall short, emphasizing the importance of finding the "sweet spot" by asking employees what they hate most about their jobs.

The conversation also covers Vercel's significant investment in Python support, bringing zero-config deployment to Flask and FastAPI applications, and their vision for security in an AI-coded world where developers "cannot be trusted." Malte shares his perspective on how CTOs must transform their companies for the AI era while staying true to their core competencies, and why maintaining strong IC (individual contributor) career paths is crucial as AI changes the nature of software development.

What was launched at Ship AI 2025:

AI SDK 6.0 & Agent Architecture

Agent Abstraction Philosophy: AI SDK 6 introduces an agent abstraction where you can "define once, deploy everywhere". How does this differ from existing agent frameworks like LangChain or AutoGPT? What specific pain points did you observe in production that led to this design?

Human-in-the-Loop at Scale: The tool approval system with needsApproval: true gates actions until human confirmation. How do you envision this working at scale for companies with thousands of agent executions? What's the queue management and escalation strategy?
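The `needsApproval: true` gate the question refers to might work roughly like this. This is a self-contained sketch of the pattern (the `Pending` queue and helper names are invented here), not the AI SDK's real implementation:

```typescript
// Sketch of a human-in-the-loop gate: tools flagged as needing approval
// are parked in a queue instead of executed, and run once a human confirms.
type ToolDef = { needsApproval?: boolean; run: (input: string) => string };

type Pending = { tool: string; input: string };

function executeOrQueue(
  tools: Record<string, ToolDef>,
  tool: string,
  input: string,
  queue: Pending[],
): string {
  const def = tools[tool];
  if (def.needsApproval) {
    queue.push({ tool, input }); // park it for human review
    return "pending";
  }
  return def.run(input);
}

function approve(tools: Record<string, ToolDef>, queue: Pending[]): string[] {
  // Human approved everything queued: drain and execute.
  return queue.splice(0).map((p) => tools[p.tool].run(p.input));
}

const gatedTools: Record<string, ToolDef> = {
  lookup: { run: (q) => `result:${q}` },
  deleteRecord: { needsApproval: true, run: (id) => `deleted:${id}` },
};
const queue: Pending[] = [];
const r1 = executeOrQueue(gatedTools, "lookup", "a", queue); // "result:a"
const r2 = executeOrQueue(gatedTools, "deleteRecord", "x", queue); // "pending"
const done = approve(gatedTools, queue); // ["deleted:x"]
```

At scale, the interesting engineering is in the queue itself — escalation, timeouts, and batching — which is exactly what the question above probes.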

Type Safety Across Models: AI SDK 6 promises "end-to-end type safety across models and UI". Given that different LLMs have varying capabilities and output formats, how do you maintain type guarantees when swapping between providers like OpenAI, Anthropic, or Mistral?
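One common way to keep type guarantees while swapping providers is to normalize every provider's raw output into a single discriminated union. The sketch below is hypothetical (the `Provider` interface here is invented, not the AI SDK's):

```typescript
// Sketch: each provider adapter maps its raw output into one shared,
// typed shape, so downstream code type-checks against a single union
// regardless of which model produced the response.
type Completion =
  | { kind: "text"; text: string }
  | { kind: "tool_call"; name: string; args: Record<string, string> };

interface Provider {
  complete(prompt: string): Completion;
}

// Two stub providers with different behaviors, normalized on the way out.
const providerA: Provider = {
  complete: (p) => ({ kind: "text", text: `A says: ${p}` }),
};
const providerB: Provider = {
  complete: (p) => ({ kind: "tool_call", name: "search", args: { query: p } }),
};

function render(c: Completion): string {
  // Exhaustive switch: the compiler flags any unhandled variant.
  switch (c.kind) {
    case "text":
      return c.text;
    case "tool_call":
      return `call ${c.name}(${JSON.stringify(c.args)})`;
  }
}

const out = render(providerA.complete("hi")); // "A says: hi"
```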

Workflow Development Kit (WDK)

Durability as Code: The use workflow primitive makes any TypeScript function durable with automatic retries, progress persistence, and observability. What's happening under the hood? Are you using event sourcing, checkpoint/restart, or a different pattern?
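For context on the patterns the question names, a toy event-sourced replay looks like the following. This is purely illustrative; the source does not say which mechanism the WDK actually uses:

```typescript
// Toy event sourcing: progress is never stored as mutable state; it is
// rebuilt by replaying an append-only event log, which makes resumption
// after a crash deterministic.
type WorkflowEvent = { step: string; output: number };

function replay(log: WorkflowEvent[]): Map<string, number> {
  const state = new Map<string, number>();
  for (const e of log) state.set(e.step, e.output);
  return state;
}

function runDurable(log: WorkflowEvent[], step: string, fn: () => number): number {
  const state = replay(log); // rebuild progress from the log
  if (state.has(step)) return state.get(step)!; // replayed: skip re-execution
  const output = fn();
  log.push({ step, output }); // append-only: the log is the source of truth
  return output;
}

const log: WorkflowEvent[] = [];
runDurable(log, "a", () => 10);
// Simulate a crash and restart: a fresh run replays the log and skips "a".
const v = runDurable(log, "a", () => {
  throw new Error("should not re-run");
}); // v === 10
```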

Infrastructure Provisioning: Vercel automatically detects when a function is durable and dynamically provisions infrastructure in real-time. What signals are you detecting in the code, and how do you determine the optimal infrastructure configuration (queue sizes, retry policies, timeout values)?

Vercel Agent (beta)

Code Review Validation: The Agent reviews code and proposes "validated patches". What does "validated" mean in this context? Are you running automated tests, static analysis, or something more sophisticated?

AI Investigations: Vercel Agent automatically opens AI investigations when it detects performance or error spikes using real production data. What data sources does it have access to? How does it distinguish between normal variance and actual anomalies?
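Distinguishing normal variance from a real spike is commonly done with something like a z-score threshold. The toy check below illustrates the idea only — it is not Vercel's actual detector:

```typescript
// Toy anomaly check: flag a new observation only when it sits more than
// `threshold` standard deviations from the baseline mean, so ordinary
// variance does not trigger an investigation.
function isAnomaly(baseline: number[], value: number, threshold = 3): boolean {
  const mean = baseline.reduce((s, x) => s + x, 0) / baseline.length;
  const variance =
    baseline.reduce((s, x) => s + (x - mean) ** 2, 0) / baseline.length;
  const std = Math.sqrt(variance);
  if (std === 0) return value !== mean; // flat baseline: any change is a spike
  return Math.abs(value - mean) / std > threshold;
}

const errorRates = [0.01, 0.012, 0.011, 0.009, 0.01];
isAnomaly(errorRates, 0.011); // false: within normal variance
isAnomaly(errorRates, 0.2); // true: a genuine spike
```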

Python Support

For the first time, Vercel now supports Python backends natively.

Marketplace & Agent Ecosystem

Agent Network Effects: The Marketplace now offers agents like CodeRabbit, Corridor, Sourcery, and integrations with Autonoma, Braintrust, Browser Use. How do you ensure these third-party agents can't access sensitive customer data? What's the security model?

"An Agent on Every Desk" Program

Vercel launched a new program to help companies identify high-value use cases and build their first production AI agents. It provides consultations, reference templates, and hands-on support to go from idea to deployed agent.
