SuperX Launches XN9160-B300 AI Server with NVIDIA for Next-Gen Compute

John Brown


SuperX AI Technology Limited (NASDAQ: SUPX) has announced its latest flagship, the SuperX XN9160-B300 AI Server, equipped with eight NVIDIA Blackwell B300 GPUs to deliver peak performance for AI training, inference, and high-performance computing (HPC) workloads.

Designed for scalability, efficiency, and modularity, the XN9160-B300 is built to address the demands of data centers, AI factories, and scientific research environments.

Key Specifications & Architecture

  • Hardware & Chassis: The AI server is housed in an 8U form factor and packs dual Intel Xeon 6 CPUs, 32 DDR5 memory slots, and high-speed networking.
  • GPU & Memory: It integrates NVIDIA's HGX B300 module with eight B300 GPUs. The system deploys 2,304 GB of unified HBM3E memory (288 GB per GPU), eliminating the need for memory offloading and enabling large model training and inference.
  • Interconnect & Networking: Connectivity includes 8 × 800 Gb/s InfiniBand or dual 400 Gb/s Ethernet, plus 5th-generation NVLink, ensuring ultra-low latency and high throughput for distributed workloads.
  • Performance Gains: NVIDIA's Blackwell Ultra architecture boosts compute with roughly 50% more NVFP4 throughput and roughly 50% more HBM memory than the previous-generation Blackwell chips.
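As a quick sanity check, the headline figures above are internally consistent: 8 GPUs at 288 GB each give the quoted 2,304 GB of unified HBM3E, and eight 800 Gb/s InfiniBand links provide 6.4 Tb/s of aggregate scale-out bandwidth. A minimal illustrative calculation (the variable names are my own, not a vendor API):

```python
# Sanity-check the published XN9160-B300 spec-sheet numbers.
# Illustrative only; constants come from the article above.

GPUS = 8
HBM_PER_GPU_GB = 288        # HBM3E per B300 GPU
IB_LINKS = 8
IB_LINK_GBPS = 800          # per InfiniBand link, Gb/s

total_hbm_gb = GPUS * HBM_PER_GPU_GB                 # unified HBM3E pool
aggregate_ib_tbps = IB_LINKS * IB_LINK_GBPS / 1000   # scale-out bandwidth

print(f"Total HBM3E: {total_hbm_gb} GB")             # 2304 GB
print(f"Aggregate InfiniBand: {aggregate_ib_tbps} Tb/s")  # 6.4 Tb/s
```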

Use Cases & Target Markets

SuperX positions the XN9160-B300 server for a wide range of high-demand applications, such as:

  • AI model training & inference: Particularly for foundation models, multimodal systems, and long-context models
  • Scientific & HPC work: Climate modeling, genomics, physics simulations, and large-scale research
  • Enterprise & financial analytics: Real-time risk modeling, quantitative simulations, and data-intensive workflows
  • Edge & data center transformation: Building AI “superpods” or next-gen compute clusters

Why This Matters

  • Pushing AI infrastructure forward: This server marks a step toward hardware that can support ever-larger models with fewer bottlenecks.
  • Efficiency under load: With memory unified across GPUs and high-speed interconnects between them, performance scales more seamlessly across distributed workloads.
  • Modular & future-proof: The design supports upgrades, maintainability, and integration into evolving AI data center ecosystems.

Discover IT Tech News for the latest updates on IT advancements and AI innovations.

Read related news - https://ittech-news.com/supabase-raises-100m-at-5b-valuation-co-led-by-accel-and-peak-xv/
 
 