
Hardware Pulse

Hardware

Pawsey Expands High-Memory Resources on Setonix for Data-Intensive Workloads

HPCwire • February 26, 2026

Why It Matters

The upgrade removes a critical memory bottleneck, enabling more ambitious, data‑intensive simulations and reducing job failures, which accelerates scientific discovery across multiple high‑impact domains.

Key Takeaways

  • High-memory nodes on Setonix doubled to 16.
  • Each node has 1 TB of RAM and dual 64-core AMD EPYC CPUs.
  • The expansion was driven by user-survey demand across scientific domains.
  • Usage caps: two jobs per researcher, four per project.
  • An NCRIS grant funded the upgrade, boosting national research capacity.
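As a rough sketch of how these specs translate into practice (the partition name, account, and executable below are placeholders, not confirmed Setonix configuration), a researcher might request one of the 1 TB nodes with a Slurm batch script along these lines:

```bash
#!/bin/bash
#SBATCH --partition=highmem     # placeholder name for the high-memory partition
#SBATCH --account=project123    # placeholder project allocation
#SBATCH --nodes=1               # single-node, memory-heavy workload
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=128     # dual 64-core EPYC CPUs per node
#SBATCH --mem=980G              # request most of the node's 1 TB of RAM
#SBATCH --time=24:00:00

srun ./my_memory_heavy_code
```

Under the stated caps, a researcher could have at most two such jobs queued or running at once, and a project at most four.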

Pulse Analysis

The surge in data-intensive research, from genomics to climate modeling, has outpaced traditional HPC memory configurations. Single-node workloads that need hundreds of gigabytes of contiguous memory are becoming routine, yet many supercomputers still rely on fragmented memory pools that limit performance. By provisioning 1 TB of shared RAM per node, Pawsey addresses this gap, allowing researchers to load massive datasets directly into memory, cut I/O overhead, and achieve faster time-to-solution for complex simulations and AI training tasks.
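The trade-off described above can be illustrated with a toy Python sketch (not Pawsey code): both paths compute the same result, but the streaming version pays disk I/O on every pass over the data, while the in-memory version touches the disk once, which is what a 1 TB node makes feasible at real dataset scales.

```python
import os
import tempfile

import numpy as np

# Stand-in "dataset" written to disk (integer dtype keeps sums exact).
path = os.path.join(tempfile.mkdtemp(), "data.npy")
np.save(path, np.arange(1_000_000, dtype=np.int64))

# Streaming approach: memory-map the file and process it chunk by chunk,
# paying I/O cost every time the data is traversed.
mm = np.load(path, mmap_mode="r")
chunk_sum = sum(int(mm[i:i + 100_000].sum()) for i in range(0, len(mm), 100_000))

# High-memory approach: load the whole array into RAM once; every
# subsequent operation runs at memory speed with no further disk reads.
data = np.load(path)
inmem_sum = int(data.sum())

assert chunk_sum == inmem_sum == 499_999_500_000
```

On a node with enough RAM, the second pattern also avoids the failure mode the article highlights: jobs aborting when a working set no longer fits in memory.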

Pawsey’s decision to double high‑memory capacity stems from direct feedback collected through its annual user survey, a practice that aligns infrastructure investment with real‑world scientific needs. Disciplines such as life sciences and materials engineering, which increasingly rely on single‑node, memory‑heavy codes, will see reduced queue times and fewer job aborts. The capped usage model balances equitable access with resource efficiency, ensuring that both individual investigators and collaborative projects can leverage the expanded nodes without overwhelming the system.

The NCRIS grant that financed the upgrade highlights the strategic role of government funding in maintaining Australia’s competitive edge in high‑performance computing. As global research institutions race to provide exascale‑class capabilities, Pawsey’s focus on high‑memory nodes positions it as a niche leader for workloads that traditional GPU‑centric clusters struggle to serve. Looking ahead, the planned increase to four concurrent jobs per project will further improve throughput, making Setonix a more attractive platform for multi‑disciplinary teams seeking scalable, memory‑rich compute power.
