Nanotech News and Headlines

Nanotech Pulse

Nanotech

Software Allows Scientists to Simulate Nanodevices on a Supercomputer

Phys.org – Nanotechnology • January 26, 2026

Why It Matters

QuaTrEx makes realistic quantum simulations of nanoscale transistors feasible, accelerating semiconductor innovation and reducing costly physical prototyping.

Key Takeaways

  • QuaTrEx simulates a 42,240‑atom nanoribbon on an exascale machine
  • Combines DFT, GW, and NEGF for quantum transport accuracy
  • New boundary‑condition algorithm cuts compute time dramatically
  • Achieved >1 exaFLOP sustained performance; Gordon Bell Prize finalist
  • Future plans include mixed‑precision arithmetic and ML‑driven Hamiltonians

Pulse Analysis

The relentless push toward sub‑10 nm transistors forces semiconductor engineers into the quantum regime, where classical models fail. Traditional ab‑initio tools can only treat a few hundred atoms, far short of the tens of thousands required for a realistic device cross‑section. As AI workloads and edge computing demand ever‑more powerful chips, the industry needs predictive simulations that capture electron‑electron interactions, excited‑state effects, and non‑equilibrium transport. Exascale supercomputers provide the raw horsepower, but without algorithmic breakthroughs the problem remains intractable.

The newly released QuaTrEx package tackles this bottleneck by fusing density‑functional theory, the GW approximation, and non‑equilibrium Green's functions into a single workflow. Two key innovations, accelerated evaluation of open‑boundary conditions and an open‑boundary treatment of the screened interaction W in the GW step, allow the code to scale across thousands of GPUs, delivering sustained performance above one exaFLOP (10^18 floating‑point operations per second). In practice the team simulated a nanoribbon comprising 42,240 atoms, matching the geometry of commercially fabricated transistors. This level of fidelity, previously limited to proof‑of‑concept studies, opens a path to virtual prototyping of next‑generation chips.
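
The article gives no equations, but the DFT+GW+NEGF workflow it describes rests on a standard formulation. A minimal sketch in conventional notation (the symbols below are textbook quantities, not taken from the QuaTrEx paper):

```latex
% Retarded Green's function of the open device region.
% H: device Hamiltonian (from DFT), S: overlap matrix,
% \Sigma_{L/R}: lead self-energies from the open-boundary conditions,
% \Sigma_{GW}: electron-electron self-energy from the GW approximation.
\[
G^{R}(E) = \bigl[(E + i\eta)\,S - H - \Sigma_{L}(E) - \Sigma_{R}(E) - \Sigma_{GW}(E)\bigr]^{-1}
\]
% The GW self-energy is a convolution of G with the screened
% interaction W = \varepsilon^{-1} v, where \varepsilon = 1 - vP:
\[
\Sigma_{GW}(E) = \frac{i}{2\pi}\int dE'\; G(E')\, W(E - E')
\]
```

The boundary self-energies \(\Sigma_{L/R}\) are what couple the finite atomistic region to semi-infinite contacts; accelerating their evaluation, as the article reports, directly attacks one of the dominant costs of large-device NEGF calculations.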

Looking ahead, the authors plan to introduce mixed‑precision arithmetic and machine‑learning‑generated Hamiltonians, which could slash runtimes by an order of magnitude while preserving accuracy. Extending the framework to full logic gates or small circuits would let designers evaluate power, speed, and reliability before silicon is ever produced. For semiconductor manufacturers, such capability promises shorter development cycles, lower R&D costs, and a competitive edge in the race to sustain Moore’s law‑like scaling through quantum‑aware design.

