• Unimaginable AI. Unleashed.
    • ASUS AI POD
      NVIDIA® GB200 NVL72
      • 36 NVIDIA Grace CPUs
      • 72 NVIDIA Blackwell Tensor Core GPUs
      • 5th Gen NVIDIA NVLink technology
      • Supports trillion-parameter LLM inference and training
      • Scale-up ecosystem-ready
      • ASUS Infrastructure Deployment Manager
      • End-to-end services
  • The NVIDIA Blackwell GPU Breakthrough

    The ASUS AI POD contains 72 NVIDIA Blackwell GPUs, each packing 208 billion transistors. NVIDIA Blackwell GPUs feature two reticle-limited dies connected by a 10 terabytes per second (TB/s) chip-to-chip interconnect into a single unified GPU.
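    To put those figures in perspective, here is a quick back-of-the-envelope tally for one rack, a minimal Python sketch using only the per-GPU numbers quoted above (not an ASUS tool):

      # Rough totals for one ASUS AI POD rack (GB200 NVL72),
      # using only the per-GPU figures quoted above.
      GPUS_PER_RACK = 72            # NVIDIA Blackwell GPUs per rack
      TRANSISTORS_PER_GPU = 208e9   # 208 billion transistors per GPU
      DIE_TO_DIE_BW_TBPS = 10       # 10 TB/s chip-to-chip interconnect per GPU

      total_transistors = GPUS_PER_RACK * TRANSISTORS_PER_GPU
      print(f"Transistors per rack: {total_transistors / 1e12:.1f} trillion")  # ~15.0 trillion
      print(f"Die-to-die bandwidth per GPU: {DIE_TO_DIE_BW_TBPS} TB/s")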

    • 30X LLM Inference vs. NVIDIA H100 Tensor Core GPU
    • 4X LLM Training vs. H100
    • 25X Energy Efficiency vs. H100
    • 18X Data Processing vs. CPU

    • LLM inference and energy efficiency: token-to-token latency (TTL) = 50 milliseconds (ms) real time, first token latency (FTL) = 5 s, 32,768 input / 1,024 output tokens, NVIDIA HGX™ H100 scaled over InfiniBand (IB) vs. GB200 NVL72; training: 1.8T-parameter MoE, 4,096x HGX H100 scaled over IB vs. 456x GB200 NVL72 scaled over IB. Cluster size: 32,768
    • A database join and aggregation workload with Snappy / Deflate compression derived from the TPC-H Q4 query. Custom query implementations for x86, a single H100 GPU and a single GPU from GB200 NVL72 vs. Intel Xeon 8480+ (a simplified sketch of this query shape follows these notes)
    • Projected performance subject to change
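    The data processing footnote above refers to a TPC-H Q4-style query. As a rough illustration of that workload class only, not the custom x86/GPU implementations used for the measurement, the sketch below runs a Q4-shaped join and aggregation in pandas on toy data; column names follow the standard TPC-H schema.

      import pandas as pd

      # Toy stand-ins for the TPC-H 'orders' and 'lineitem' tables.
      orders = pd.DataFrame({
          "o_orderkey": [1, 2, 3],
          "o_orderdate": pd.to_datetime(["1993-07-05", "1993-08-10", "1994-01-02"]),
          "o_orderpriority": ["1-URGENT", "2-HIGH", "1-URGENT"],
      })
      lineitem = pd.DataFrame({
          "l_orderkey": [1, 1, 2, 3],
          "l_commitdate": pd.to_datetime(["1993-07-20", "1993-07-25", "1993-08-30", "1994-01-20"]),
          "l_receiptdate": pd.to_datetime(["1993-07-28", "1993-07-24", "1993-08-29", "1994-01-25"]),
      })

      # Q4 date predicate: orders placed within a three-month window.
      window = orders[(orders.o_orderdate >= "1993-07-01") & (orders.o_orderdate < "1993-10-01")]

      # EXISTS semantics: keep orders with at least one line item received after its commit date.
      late_keys = lineitem.loc[lineitem.l_receiptdate > lineitem.l_commitdate, "l_orderkey"].unique()

      # Count qualifying orders per priority class.
      result = (window[window.o_orderkey.isin(late_keys)]
                .groupby("o_orderpriority").size()
                .rename("order_count").reset_index())
      print(result)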
  • Go Beyond Hardware
    ASUS Software-driven Solution

    Outshining competitors, ASUS specializes in crafting tailored data center solutions and providing end-to-end services spanning from hybrid servers to edge-computing deployments. We don't just stop at hardware – we go the extra mile by offering software solutions to enterprises. Our software-driven approach encompasses system verification and remote deployment, ensuring seamless operations that speed up AI development.

  • Explore more AI breakthroughs in a single rack
  • Brilliantly Fast
    5th-gen NVLink technology in NVIDIA GB200 NVL72

    The NVIDIA NVLink Switch features 144 ports with a switching capacity of 14.4 TB/s; nine switches interconnect the NVLink ports on each of the 72 NVIDIA Blackwell GPUs within a single NVLink domain.
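    As a quick sanity check on those figures, the sketch below multiplies them out; the 1.8 TB/s per-GPU figure is an assumed fifth-generation NVLink bandwidth, not a number quoted above.

      # Aggregate switching capacity of the nine NVLink Switches in one rack,
      # compared against the combined NVLink bandwidth of 72 Blackwell GPUs.
      SWITCHES = 9
      SWITCH_CAPACITY_TBPS = 14.4   # per NVLink Switch, as quoted above
      GPUS = 72
      GPU_NVLINK_TBPS = 1.8         # assumed per-GPU 5th-gen NVLink bandwidth

      switch_total = SWITCHES * SWITCH_CAPACITY_TBPS   # 129.6 TB/s
      gpu_total = GPUS * GPU_NVLINK_TBPS               # 129.6 TB/s
      print(f"Switch capacity: {switch_total:.1f} TB/s vs. GPU NVLink demand: {gpu_total:.1f} TB/s")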

    • NVLink connectivity in a single compute tray ensures a direct connection to all GPUs

    • NVLink connectivity in a single rack

  • Maximize Efficiency, Minimize Heat
    Liquid-cooling Architectures
    • The flow of hot and cold water in the ASUS AI POD’s single compute tray

    • Our solutions optimize cooling efficiency from the single-cabinet ASUS AI POD, through the entire data center and finally to the cooling water tower – completing the water cycle. We offer the choice of either liquid-to-air or liquid-to-liquid cooling solutions to ensure effective heat dissipation.

  • Reduce Energy Waste, Optimize TCO
    ASUS Power Distribution Board
    The power distribution board converts 48 V DC to 12 V DC for the NVIDIA Blackwell GPUs (a rough conduction-loss illustration follows the figures below)
    • $1000

      Save up to US$1,000 in electricity costs annually per rack for maximized investment value and minimized maintenance

    • 35°C

      Benefit from a temperature reduction of approximately 35°C

    • 11.3X

      Enjoy an impressive 11.3X PDB lifespan improvement
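    One reason higher-voltage distribution helps: delivering the same power at 48 V instead of 12 V quarters the current, so I²R conduction loss in the distribution path drops by roughly a factor of sixteen. The sketch below illustrates the effect with assumed, not ASUS-specified, power and resistance values.

      # Rough I^2 * R conduction-loss comparison for delivering the same power
      # at 48 V DC versus 12 V DC. The load and path resistance are
      # illustrative assumptions only.
      POWER_W = 1000.0          # assumed load on one distribution path
      RESISTANCE_OHM = 0.001    # assumed distribution-path resistance

      for volts in (12.0, 48.0):
          current = POWER_W / volts               # I = P / V
          loss = current ** 2 * RESISTANCE_OHM    # P_loss = I^2 * R
          print(f"{volts:>4.0f} V: {current:6.1f} A, conduction loss {loss:5.2f} W")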