
Blog Review: Apr. 15
Semiconductor Engineering’s April 15 blog review aggregates fresh technical commentary from leading EDA, foundry and chip companies. Highlights include Cadence’s eUSB2‑V2 delivering multi‑gigabit USB 2.0, Intel’s ultra‑thin GaN‑on‑silicon chiplet that fuses power and logic, and Siemens’ push for high‑level synthesis in AI‑chip creation. The roundup also covers AI prompt economics, Arm’s architecture chatbot, and a slew of system‑level topics such as PCIe 8.0 PHY alignment, power‑delivery complexity, and edge‑intelligence bottlenecks.

AI’s Growing Impact On Chip Design And EDA Tools
A panel of senior engineers from Synopsys, Intel, AMD, Nvidia, Microsoft and UC Berkeley discussed how AI is reshaping chip design and the tools that support it. They highlighted the surge in data‑center AI workloads that demand ever‑higher performance‑per‑watt, forcing EDA...

Research Bits: Apr. 14
Researchers from Hong Kong, Tsinghua and Southern University of Science and Technology unveiled CLAP, a memristor‑based platform that fuses physically unclonable function authentication with compute‑in‑memory, achieving 99.46% AUC on ECG data while shrinking area and power use. A separate team...

Startup Funding: Q1 2026
Q1 2026 saw private semiconductor startups raise over $8 billion across 80 companies, with 18 rounds exceeding $100 million and two mega‑rounds—Cerebras and Rapidus—reaching $1 billion each. AI‑centric chip designs for inference and high‑bandwidth interconnects dominated the capital, while photonics and agentic EDA...

Why Hardware Monitoring Needs Infrastructure, Not Just Sensors
Chipmakers are turning to comprehensive hardware monitoring infrastructures to handle the growing complexity of modern SoCs, which now contain billions of transistors and multiple power and clock domains. Traditional test and guard‑banding methods no longer provide sufficient visibility, prompting a...

Silent Data Corruption: A Major Reliability Challenge in Large-Scale LLM Training (TU Berlin)
Researchers at Technische Universität Berlin released a paper exposing silent data corruption (SDC) as a hidden reliability threat in large‑scale LLM training. By injecting faults into GPU matrix‑multiply instructions, they mapped how bit‑level errors propagate into loss spikes, NaNs, and...
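
The fault-injection idea can be sketched in a few lines: flip a single bit in the raw IEEE‑754 representation of one matrix‑multiply operand and observe how far the result drifts. This is a minimal illustration of the paper's premise, not its actual GPU instrumentation; the matrix sizes and bit position are arbitrary choices.

```python
import numpy as np

def flip_bit(x: np.float32, bit: int) -> np.float32:
    """Flip one bit of a float32 value via its raw IEEE-754 representation."""
    u = np.float32(x).view(np.uint32)
    return (u ^ np.uint32(1 << bit)).view(np.float32)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4)).astype(np.float32)
b = rng.standard_normal((4, 4)).astype(np.float32)

clean = a @ b

# Inject a single-bit fault into one operand, mimicking a corrupted
# matrix-multiply input. Bit 30 is the top exponent bit, so the flip
# changes the value's magnitude by many orders of magnitude.
a_faulty = a.copy()
a_faulty[0, 0] = flip_bit(a_faulty[0, 0], 30)

corrupted = a_faulty @ b
print(np.max(np.abs(corrupted - clean)))  # huge deviation from one flipped bit
```

A mantissa-bit flip would instead perturb the result only slightly, which is exactly why such errors can stay silent while still nudging a training loss.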

Study of EUV Nanostructures Using AFM With High-Aspect Ratio Tip (Purdue, Intel, Bruker)
Researchers from Purdue, Intel and Bruker published a paper showing that atomic force microscopy (AFM) with high‑aspect‑ratio diamond‑like carbon tips can map 40 nm‑pitch extreme ultraviolet (EUV) photoresist patterns, but the measurements are distorted by complex tip‑sample dynamics. By applying force‑mapping...

Photonic Packaging Resistant to Extreme Environments (NIST, Johns Hopkins, U. Of Maryland)
Researchers from NIST, Johns Hopkins and the University of Maryland have unveiled a new photonic chip packaging technique that uses direct hydroxide catalysis bonding of a V‑groove fiber array to the chip. The method tolerates extreme conditions—from cryogenic 3.8 K to...

PDN Challenges In DRAM-Based Compute-In-Memory Systems (UT Austin)
Researchers at the University of Texas at Austin released a technical paper analyzing power delivery network (PDN) challenges in DRAM‑based compute‑in‑memory (PIM) systems. The study introduces a unified taxonomy that classifies PIM‑induced current behavior by temporal (burst versus sustained) and...

Chip Industry Week In Review
Intel announced three major moves: joining Elon Musk’s Terafab AI‑robotics fab targeting 1 TW of compute, expanding its multi‑year AI and cloud partnership with Google to include custom IPUs, and showcasing the world’s thinnest GaN chiplet from its foundry. Broadcom will...

Early HBM4 Validation Points The Way For Next Generation AI And HPC Systems
Memory bandwidth is becoming the primary bottleneck for AI and high‑performance computing, driving the industry toward High‑Bandwidth Memory 4 (HBM4). Synopsys announced the world’s first HBM4 IP test chip that has been validated in silicon, achieving 9.2 Gbps eye‑opening performance across...

The Coming Breakup Between AI And The Cloud
The article argues that the era of cloud‑only AI is ending as latency, privacy, and cost pressures push intelligence onto devices. Edge AI eliminates network‑induced delays, keeps data local, and reduces reliance on expensive data‑center compute. However, limited on‑device resources...

DRAM’s Whac‑A‑Mole Security Crisis
Rowhammer remains a pervasive DRAM security flaw, and a newer variant called Rowpress is emerging as a complementary threat. Memory manufacturers have introduced refresh‑management commands—RFM, ARFM and DRFM—to target vulnerable rows, yet these mitigations are imperfect and can be weaponized....
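
The mitigation pattern behind these commands can be sketched as a per-row activation counter that triggers a targeted refresh of adjacent victim rows once a threshold is crossed. This is a conceptual model of RFM-style behavior, not any vendor's actual logic, and the threshold value here is purely illustrative.

```python
from collections import defaultdict

THRESHOLD = 4  # illustrative; real activation thresholds are far higher

class RefreshManager:
    """Toy model of refresh management: count activations per row and
    refresh the physical neighbors of any row hammered past the threshold."""

    def __init__(self, threshold: int = THRESHOLD):
        self.threshold = threshold
        self.activations = defaultdict(int)
        self.refreshed = []  # log of victim rows given a targeted refresh

    def activate(self, row: int):
        self.activations[row] += 1
        if self.activations[row] >= self.threshold:
            # Refresh the physically adjacent victim rows, then reset the count.
            self.refreshed.extend([row - 1, row + 1])
            self.activations[row] = 0

mgr = RefreshManager()
for _ in range(8):      # hammer row 100 repeatedly
    mgr.activate(100)
print(mgr.refreshed)    # neighbors 99 and 101 refreshed twice
```

The weaponization risk the article alludes to is visible even in this toy: an attacker who controls activation patterns also controls when these extra refreshes fire, turning the mitigation itself into a timing and performance lever.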

A New Era For Co-Processing
The semiconductor industry is shifting toward heterogeneous co‑processing architectures as AI workloads outpace single‑processor capabilities. CPUs remain the host, while GPUs, DSPs, NPUs and emerging RISC‑V accelerators handle specialized tasks, with data movement becoming the primary efficiency bottleneck. Vendors stress...

Rethinking Robotics Reinforcement Learning: A Practical Humanoid Training Workflow
NVIDIA’s DGX Spark workstation, powered by the Grace‑Blackwell (GB10) Superchip, runs the full Isaac Sim and Isaac Lab stack natively on Arm, eliminating cross‑compilation. By leveraging 512 parallel environments, the system achieves roughly 65,000 simulation steps per second, enabling a humanoid robot to...

Fast Isn’t Fast Enough: Redefining Metrics for Edge AI
Industry leaders at Arm, Cadence, Rambus and others argue that edge AI performance is no longer measured by peak TOPS but by real‑world latency, power draw and memory efficiency. They note that data movement and bandwidth now limit inference more...

Redefining AI Inference With New Silicon Architecture
VSORA, a fabless semiconductor firm, unveiled its Jotunn8 and Tyr AI chip families built on a reimagined data‑movement architecture that dramatically lowers cost per query for hyperscale data‑center inference and powers demanding edge use cases such as autonomous driving. The...

EDA And IP Numbers Up Again, But Numbers Are More Nuanced
Electronic design automation (EDA) and semiconductor IP revenue rose 10.3% in Q4 2025, reaching $5.466 billion versus $4.955 billion a year earlier. The CAE segment led the growth, up 9.4% to $2.083 billion, while non‑reporting IP firms—dominated by Arm—jumped 24.7% to $1.413 billion. Reporting...

Blog Review: Apr. 8
The April 8 blog roundup from Semiconductor Engineering spotlights a wave of technical breakthroughs across the semiconductor ecosystem. Cadence unveils LPDDR6 with built‑in metadata, row‑hammer mitigation and three‑rail DVFS, while Synopsys and Siemens champion multiphysics and simulation‑driven digital twins for automotive...

The Specialty Device Surge Part 2: The Process Control Challenges Of MEMS, Co-Packaged Optics, And More
The second installment of the Specialty Device Surge series highlights how MEMS, CMOS image sensors, SiC/GaN power devices, and co‑packaged optics are confronting unprecedented process‑control hurdles as wafer sizes expand to 300 mm. Each device family relies on unique materials—piezo films,...

Enhancing Silicon Reliability With In-System Test And SLM Data
The semiconductor industry is leveraging in‑system test (IST) and Silicon Lifecycle Management (SLM) data to boost chip reliability across design, manufacturing, and field operation. Traditional DFT methods such as ATPG, scan chains, and BIST remain core, but embedded monitors and...

Research Bits: Apr. 6
Researchers at Loughborough University unveiled a nanoporous niobium‑oxide memristor that performs reservoir computing directly in hardware, achieving up to 2,000‑times lower energy consumption than conventional software solutions. The same chip accurately forecasted short‑term Lorenz‑63 chaos, recognized pixelated digits and executed...

Developing A Security Framework For Chiplet-Based Systems
The article outlines a security framework for chiplet‑based systems, emphasizing that each chiplet must possess a verifiable identity tied to a platform‑wide trust chain. It describes two provisioning patterns—certificate‑based external provisioning and silicon‑derived (PUF) self‑generated keys—and explains how both feed...

Automated Multiphysics For Successful 3D-IC Design
Design teams moving to 3D‑IC architectures face intertwined power, thermal and mechanical challenges that can jeopardize yield and reliability. Traditional 2D verification tools fall short because stacked dies introduce new materials and complex inter‑dependencies. Siemens EDA’s Calibre 3DStress combined with...

AI Demand Resets Memory Market Priorities, Tightening NOR Flash Availability
The surge in AI infrastructure is creating a memory supercycle that pushes leading chipmakers to prioritize high‑margin products such as HBM, DDR5 and advanced NAND. DRAM prices are projected to jump roughly 90% quarter‑over‑quarter, while NAND could rise about 60%,...

World First: MACsec IP Receives ISO/PAS 8800 Certification For Automotive And Physical AI Security
Synopsys became the first company to earn ISO/PAS 8800 certification for its MACsec IP, a standard that secures Ethernet communication inside vehicles. The certification, validated by SGS-TÜV Saar, confirms that the IP not only protects data integrity but also meets the...

Moving Electrons, Not Just Vehicles
The article examines how modern power electronics—especially multi‑level converters, silicon‑carbide (SiC) devices, and advanced power‑management ICs—are improving efficiency in electric vehicle (EV) and robot battery systems. It highlights fast‑charging challenges, noting that 15‑minute 0‑80% charges and 750 kW superchargers generate heat...

The One Bit Problem That Can Break a System
Bit flipping, once a rare reliability glitch, has become a systemic risk as semiconductor nodes shrink, clock speeds rise, and operating voltages drop, exposing aerospace, automotive and data‑center chips to silent data corruption. The phenomenon is driven by cosmic radiation,...

Embedded World 2026: Bringing Edge AI Into The Real World
At Embedded World 2026, Synaptics demonstrated that artificial intelligence is moving off the cloud and onto the device, delivering real‑time, context‑aware capabilities at the edge. The company showcased the SYN765x platform, which bundles Wi‑Fi 7, Bluetooth 6.0 and on‑chip AI compute for...

Secure at First Silicon: Reducing Cost and Risk
Side‑channel leakage often surfaces only after first silicon, forcing expensive redesigns. The Inspector Pre‑Silicon framework embeds side‑channel analysis into RTL and gate‑level verification, generating test vectors and statistical metrics to identify leakage early. By providing actionable, module‑level insights throughout the...

Causal Inference for AMS Design (U. Of Florida)
University of Florida researchers released a technical paper introducing a causal‑inference framework for analog‑mixed‑signal (AMS) circuit design. The method builds a directed‑acyclic graph from SPICE simulation data and estimates average treatment effects (ATE) to rank design parameters. Tested on three...
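
The parameter-ranking step can be illustrated on synthetic data: with randomly assigned design choices, the average treatment effect reduces to a difference in mean outcomes between the two groups. The parameter names and the data-generating model below are invented for illustration; the paper's full method additionally builds a causal DAG from SPICE data, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Two binary "design knobs" (hypothetical names, randomly assigned here).
w_bias = rng.integers(0, 2, n)   # e.g., widen the bias transistor
c_load = rng.integers(0, 2, n)   # e.g., add extra load capacitance

# Synthetic circuit metric: gain responds strongly to w_bias, weakly to c_load.
gain = 1.0 + 0.8 * w_bias + 0.1 * c_load + rng.normal(0, 0.1, n)

def ate(treatment, outcome):
    """Naive ATE: difference in mean outcome between treated and untreated
    groups (valid here only because the toy treatments are randomized)."""
    return outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

effects = {"w_bias": ate(w_bias, gain), "c_load": ate(c_load, gain)}
ranking = sorted(effects, key=effects.get, reverse=True)
print(ranking)  # w_bias ranks first, matching its larger true effect
```

With observational (non-randomized) simulation sweeps, the naive difference would be confounded, which is precisely why the paper reaches for a causal graph before estimating effects.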

Integrating Error Propagation Theory Into the FMEDA Framework (Robert Bosch GmbH)
Robert Bosch GmbH released a technical paper that embeds error propagation theory into the FMEDA (Failure Modes, Effects, and Diagnostic Analysis) framework. The authors demonstrate how to calculate confidence intervals for the Single Point Fault Metric (SPFM) and Latent Fault...
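
For context, the SPFM point value that the paper wraps in confidence intervals is defined by ISO 26262 as the share of the safety-related failure rate that is neither a single-point nor a residual fault. The failure rates below are illustrative numbers, not figures from the Bosch paper.

```python
# Illustrative failure rates in FIT (failures per 1e9 device-hours).
lambda_spf = 0.5      # single-point faults
lambda_rf = 1.5       # residual faults
lambda_total = 100.0  # total safety-related failure rate

# SPFM per ISO 26262: 1 - (single-point + residual) / total.
spfm = 1.0 - (lambda_spf + lambda_rf) / lambda_total
print(f"SPFM = {spfm:.1%}")  # 98.0%, above the ASIL C target of >= 97%
```

The paper's contribution is that this point estimate inherits uncertainty from the underlying failure-rate data, so a metric that nominally clears an ASIL target may not do so with statistical confidence.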

In-Depth Analysis of 187 Publications on Hardware Reverse Engineering (Ruhr U., MPI)
A new Systematization of Knowledge paper from Ruhr University Bochum and the Max Planck Institute surveys 187 peer‑reviewed hardware reverse engineering (HRE) studies spanning ICs, FPGAs and netlists. The analysis reveals that only seven papers (4%) supplied reproducible artifacts, underscoring...

Systematic Analysis of CPU-Induced Slowdowns in Multi-GPU LLM Inference (Georgia Tech)
Georgia Tech researchers released a paper exposing how CPUs, not GPUs, often throttle multi‑GPU large language model (LLM) inference. Under‑provisioned CPU cores cause delayed kernel launches, stalled communication, and tokenization lag, leaving GPUs idle even when they have capacity. Adding...

Chip Industry Week In Review
Arm unveiled its first internally designed AGI CPU built on TSMC’s 3 nm process, targeting power‑efficient AI data‑center workloads. Gartner predicts inference costs for 1‑trillion‑parameter LLMs will fall more than 90% by 2030, while Google warns quantum computers could break current...

AI Workloads Are Turning The Data Center Network Into A Combined Memory And Storage Fabric
AI inference is redefining data‑center networks, turning them into a unified memory‑and‑storage fabric. Unlike the bursty traffic of classic microservices or training workloads, inference generates sustained, high‑volume data flows to fetch KV‑cache state from remote memory and flash. This shift...

Importance Of Hardware Security Verification In Pre-Silicon Design
Hardware security verification is becoming a prerequisite for any silicon destined for cloud, automotive, industrial or edge AI applications. The discipline rests on two pillars: functional security verification, which confirms that security features behave as specified, and protection verification, which...

Memory Wall Gets Higher
SRAM scaling has stalled, causing the memory wall to rise as each new node shrink consumes a larger chip fraction without delivering proportional capacity or speed gains. The issue now affects not only cutting‑edge AI accelerators but will eventually impact...

Precision In Depth: Extraction Workflows For CFETs And Buried Power Rails
Chip designers are turning to complementary field‑effect transistors (CFETs) and buried power rails (BPRs) to extend Moore’s Law beyond the 5 nm barrier. By stacking n‑ and p‑type devices vertically and routing power beneath the active layers, these architectures double density...

Detect, Diagnose, And Debug Using Sensors And Functional Monitoring
Modern AI accelerators generate nanosecond‑scale current spikes that push on‑die power delivery networks (PDN) beyond their voltage limits, capping computational throughput. Rack power density is soaring toward 100 kW, creating transient load spikes that traditional power infrastructure cannot absorb. Siemens Tessent Embedded...

Removing The Accuracy And Time Tradeoff In EM Simulation
For years engineers balanced electromagnetic FEM accuracy against long solve times, especially as frequencies surpassed 60 GHz and meshes grew. Keysight’s Advanced Design System now embeds NVIDIA’s cuDSS sparse direct solver, running on H100 GPUs, to accelerate FEM linear solves. Benchmarks...

AI Won’t Kill Verification IP, But It Will Redefine It
Verification IP (VIP) remains essential as chip designs move to 3nm and 2nm nodes, accounting for roughly 68% of the development cycle. AI tools are poised to augment VIP by automating test generation, integration, and debug, rather than replacing the...

Beating The Heat In 3D Packages
Thermal management has become a top‑level constraint for 3D multi‑die packages as power densities exceed 1 kW. Engineers are adopting AI‑driven adaptive meshing and real‑world test wafers to bridge simulation and measurement, while system‑level technology co‑optimization (STCO) strategies have cut GPU...

Auto Ethernet 10BASE-T1s Steps Up, With Tbps On The Horizon
Automotive Ethernet, especially the 10BASE‑T1S single‑pair standard, is emerging as the primary replacement for the legacy CAN bus in modern vehicles. While 10 Mbps meets current low‑speed needs, OEMs are already planning higher‑speed links—25 Gbps, 100 Gbps, and eventually terabit per second—to support...

How SW and HW Vulnerabilities Can Complement LLM-Specific Algorithmic Attacks (UT Austin, Intel Et Al.)
A collaborative paper titled “Cascade” reveals how conventional software and hardware flaws can be weaponized alongside LLM‑specific algorithmic attacks to compromise compound AI pipelines. The authors demonstrate two proof‑of‑concept attacks: a code‑injection combined with a Rowhammer guardrail bypass that injects...

Bias- and Temperature-Dependent Noise Measurements to Investigate Carrier Transport at the Tellurium Interface (POSTECH)
Researchers at POSTECH have identified contact‑origin trap‑assisted tunneling as the dominant source of low‑frequency noise in ultrathin (5 nm) tellurium field‑effect transistors at room temperature. Temperature‑dependent 1/f noise measurements reveal that cooling to 100 K suppresses trap activation, restoring the carrier‑number‑fluctuation (CNF)...

Liquid Cooling Drives Other Localized Cooling
Liquid cooling is increasingly used to manage high‑power GPUs and AI chips, but removing traditional airflow can leave nearby components overheating. Engineers must perform whole‑board thermal analysis to identify chips that transition from warm to hot without liquid cooling. Alternative...

Advanced Packaging Limits Come Into Focus
Advanced packaging has become the primary performance variable for AI and HPC chips, with substrate, bonding, and process sequence dictating scalability. Engineers now face warpage, glass fragility, hybrid‑bond yield, and substrate limits as the dominant yield‑killers as packages grow larger...

Identifying Read Disturbance Threshold of DRAM Chips (ETH Zurich, Rutgers)
A paper titled “DiscoRD” from ETH Zurich and Rutgers introduces a rapid experimental method to determine the read‑disturbance threshold (RDT) of DDR4 DRAM chips. The authors measured hundreds of thousands of rows across 212 chips, building an empirical model of...
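
The core measurement problem can be sketched as a search: find the smallest hammer count that induces a bit flip without testing every count exhaustively. The bisection below is a conceptual illustration of that search, with a stand-in oracle and an invented threshold value rather than the paper's actual experimental procedure.

```python
# Hypothetical "true" threshold for one row; a real experiment would not know this.
TRUE_RDT = 48_731

def row_flips(hammer_count: int) -> bool:
    """Stand-in for hammering a row N times and checking victims for flips."""
    return hammer_count >= TRUE_RDT

def find_rdt(lo: int = 1, hi: int = 200_000) -> int:
    """Smallest hammer count that induces a flip, found by binary search."""
    while lo < hi:
        mid = (lo + hi) // 2
        if row_flips(mid):
            hi = mid      # a flip occurred: the threshold is at or below mid
        else:
            lo = mid + 1  # no flip: the threshold is above mid
    return lo

print(find_rdt())  # recovers the threshold in ~18 trials instead of 200,000
```

Real DRAM adds noise and row-to-row variation, so measuring hundreds of thousands of rows quickly, as the paper does across 212 chips, demands far more care than this idealized oracle suggests.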

Analysis of the Evolving Landscape of Ultra-Low-Power Edge AI Processors (U. Of Austria, ETH Zurich)
A new arXiv paper from the University of Austria and ETH Zurich benchmarks ultra‑low‑power edge AI processors across three architectures: the RISC‑V‑based GAP9, the ARM Cortex‑M55 STM32N6, and Sony's in‑sensor IMX500. The study evaluates latency, inference efficiency, energy use, and...