
AI Pulse

Even the Chip Makers Are Making LLMs
Stack Overflow Podcast • March 10, 2026 • 26 min
Tags: CTO Pulse, AI, Hardware

Why It Matters

Understanding NVIDIA’s co‑design approach reveals how hardware advances are driving the next wave of AI capabilities, making large models faster and cheaper to run. For developers and engineers, these insights show where future performance gains will come from and why adopting new precision formats and serving architectures will be crucial for building next‑generation AI applications.

Key Takeaways

  • NVIDIA co-designs GPUs and LLMs for optimal performance.
  • FP8 precision halves memory while retaining accuracy that post-training quantization would lose.
  • The Nemotron family combines Mamba state-space and transformer layers.
  • Disaggregated serving (Dynamo, NIXL) improves large-model inference.
  • The open-source release includes weights, data, and full training recipes.

Pulse Analysis

NVIDIA’s move into large‑language‑model development is not a marketing sidestep; it stems from a decades‑long hardware‑software co‑design philosophy that began with CUDA and high‑performance computing. By embedding AI workloads into the GPU roadmap, NVIDIA can identify the most demanding tasks—such as computational fluid dynamics, speech synthesis, and finally LLM training—and shape silicon to accelerate them. This tight feedback loop lets the architecture team anticipate future compute, networking, and storage needs, ensuring that each new GPU generation is purpose‑built for the evolving AI landscape. The result is a full‑stack offering that blurs the line between chip maker and model creator.

One of the most tangible outcomes of this co‑design is NVIDIA’s push toward lower‑precision arithmetic. Training and inference in FP8, as demonstrated on the Hopper and Blackwell GPUs, cuts model memory by roughly half relative to FP16 while preserving accuracy that traditional post‑training quantization would lose. Coupled with the Nemotron family—Nano, Super, and Ultra—these models fuse Mamba state‑space layers with classic transformers, delivering superior token efficiency and enabling million‑token context lengths. Hardware innovations like the Context Memory Engine further streamline how massive contexts traverse the memory hierarchy, reducing latency and expanding the practical scale of agentic AI systems.
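The "roughly half" memory claim follows directly from the byte width of each format: FP8 stores one byte per parameter versus two for FP16. A minimal back-of-the-envelope sketch (the parameter count and helper function are illustrative, not from the episode; real footprints also include optimizer state, activations, and the KV cache):

```python
# Rough weight-storage comparison across precisions. Only the bytes-per-
# parameter ratio comes from the format definitions; everything else here
# (the 70B parameter count, the helper name) is a hypothetical example.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1}

def weight_memory_gb(num_params: int, precision: str) -> float:
    """Gigabytes needed just to hold the model weights at a given precision."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

params = 70_000_000_000  # a hypothetical 70B-parameter model

print(f"FP16 weights: {weight_memory_gb(params, 'fp16'):.0f} GB")  # 140 GB
print(f"FP8 weights:  {weight_memory_gb(params, 'fp8'):.0f} GB")   # 70 GB
```

The same halving applies to memory bandwidth during inference, which is why lower precision tends to speed up decoding even when raw compute is not the bottleneck.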

Beyond silicon, NVIDIA is democratizing its advances through an open‑source ecosystem. Full model weights, training data, and libraries are released publicly, allowing researchers and enterprises to replicate and extend the recipes. Frameworks such as Dynamo and NIXL provide disaggregated serving, letting different GPU clusters handle the prefill and decode stages of inference for massive models, while specialized storage partners embed inference results directly into memory solutions. This open‑model strategy accelerates community innovation, fuels specialized agent development, and positions NVIDIA as both a hardware leader and a catalyst for the next generation of AI applications.
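The prefill/decode split mentioned above is worth making concrete: prefill processes the entire prompt in one compute-bound pass and produces the attention KV cache, while decode generates tokens one at a time and is largely memory-bound, so the two stages benefit from different hardware pools. A toy sketch of that division of labor (all names and the dataclass stand-in for the KV cache are invented for illustration; real systems transfer actual key/value tensors between GPU clusters over fast interconnects):

```python
# Toy illustration of disaggregated serving: one worker pool does the
# compute-heavy prefill and hands off a KV cache; a separate pool runs
# the memory-bound decode loop against that cache.

from dataclasses import dataclass

@dataclass
class KVCache:
    prompt: str
    num_tokens: int  # stand-in for the real attention key/value tensors

def prefill_worker(prompt: str) -> KVCache:
    """Process the full prompt once; in practice this saturates GPU compute."""
    return KVCache(prompt=prompt, num_tokens=len(prompt.split()))

def decode_worker(cache: KVCache, max_new_tokens: int) -> list[str]:
    """Generate tokens one at a time against the transferred cache."""
    return [f"<tok{i}>" for i in range(max_new_tokens)]

cache = prefill_worker("Why are chip makers building LLMs?")
tokens = decode_worker(cache, max_new_tokens=4)
print(cache.num_tokens, tokens)
```

Because the two stages scale differently, an operator can size the prefill and decode pools independently, which is the core efficiency argument for disaggregation.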

Episode Description

Ryan welcomes Kari Briski, NVIDIA’s VP of Generative AI Software for Enterprise, to the show to explore how a chip manufacturer got into the model development game. They discuss NVIDIA’s co-design feedback loop between model builders and hardware architects, share insights on precision model training and memory management systems, and take a look at the roadmap and development of NVIDIA’s fully open-source Nemotron. 

Episode notes: 

Nemotron is a family of open models with open weights, training data, and recipes for building specialized AI agents. You can learn more on their Hugging Face page or at NVIDIA GTC on March 16-19.

Connect with Kari on LinkedIn.

Congrats to user The4thIceman for winning a Populist badge on their answer to How to Center Text in Pygame.

