
Ravi Subramanian on Trends that Are Shaping AI at Synopsys
Key Takeaways
- Convergence merges silicon and systems engineering disciplines
- AI efficiency metrics shift to tokens per dollar/watt
- Compute, interconnect, storage, power drive AI hardware performance
- Semiconductor supply chain entering decade of major reconstruction
- Engineers need cross‑disciplinary expertise for physical AI
Summary
Ravi Subramanian, Synopsys' Chief Product Management Officer, explained how AI is driving the convergence of silicon design and systems engineering, a shift highlighted at the Synopsys Converge event. He noted the industry’s move from throughput‑focused metrics to efficiency‑centric measures such as tokens per dollar and tokens per watt. Subramanian also outlined the four pillars of AI hardware—compute, interconnect, storage and power—and warned of a decade‑long restructuring of the semiconductor supply chain. The interview stresses the need for engineers to master both hardware and system‑level disciplines as AI expands into physical products.
Pulse Analysis
The Synopsys Converge event underscores a structural shift in technology development: silicon designers and systems engineers are no longer operating in silos. As autonomous vehicles, robotics and smart devices demand tightly coupled hardware and software, chip architectures must be co‑designed with system‑level constraints such as latency, power and form factor. This convergence accelerates time‑to‑market for AI‑enabled products and forces companies to embed system thinking early in the silicon design flow, blurring the traditional boundary between chip and product engineering. This integrated approach also reduces validation cycles and improves overall system reliability.
Ravi Subramanian highlights a decisive move from raw throughput to efficiency‑centric metrics. Tokens per dollar and tokens per watt now dominate performance discussions because AI workloads consume massive energy and operational budgets. Data‑center operators are re‑evaluating hardware choices, favoring accelerators that deliver more work per joule and a lower total cost of ownership. This metric shift drives semiconductor firms to innovate low‑power architectures, advanced cooling solutions and smarter workload scheduling, aligning product roadmaps with the economic realities of scaling AI services across enterprises. Consequently, investors are rewarding firms that demonstrate measurable energy savings in AI deployments.
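The efficiency metrics above are simple ratios. As a minimal sketch, the comparison logic might look like the following; the accelerator names and figures are illustrative assumptions, not data from Synopsys or the interview:

```python
# Illustrative sketch of efficiency-centric accelerator comparison.
# All numbers below are hypothetical, for demonstration only.

def tokens_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Throughput normalized by power draw (tokens/s per watt)."""
    return tokens_per_second / power_watts

def tokens_per_dollar(total_tokens: float, total_cost_usd: float) -> float:
    """Work delivered per unit of operating cost."""
    return total_tokens / total_cost_usd

# Two hypothetical accelerators with different trade-offs:
# A has higher raw throughput, B draws far less power.
a = tokens_per_watt(tokens_per_second=5000, power_watts=700)
b = tokens_per_watt(tokens_per_second=4000, power_watts=400)

# Under an efficiency metric, B can win despite lower raw throughput.
print(f"A: {a:.2f} tokens/s/W, B: {b:.2f} tokens/s/W")
```

This is the essence of the metric shift Subramanian describes: ranking hardware by work per joule (or per dollar) rather than by peak throughput alone can reverse the ordering of two designs.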
The broader AI boom is prompting a decade‑long overhaul of the semiconductor supply chain. Compute, interconnect, storage and power are being re‑engineered to meet the bandwidth and energy demands of massive models, while memory shortages threaten to bottleneck data‑center expansion. Companies are investing in heterogeneous integration, chiplet ecosystems and on‑site power management to mitigate these risks. At the same time, the projected doubling of global GDP to $250 trillion hinges on productivity gains delivered by AI, making cross‑disciplinary expertise essential for engineers who must navigate hardware, software and real‑world physics simultaneously. Such strategic realignment positions the semiconductor sector as the backbone of the next wave of digital transformation.