
INFRASTRUCTURE SOLUTIONS

  • Unleash the potential of each token with ASUS AI infrastructure solutions

    Artificial intelligence, or AI, is having a profound impact – reshaping our world at an unprecedented rate. To stay competitive, proactive management is essential, and ASUS AI infrastructure solutions are pivotal in navigating this evolving landscape. ASUS offers a comprehensive range of AI solutions, from AI servers to integrated racks, AI POD for large-scale computing and the most important advanced software platforms, customized to handle all workloads, empowering you to stay ahead in the AI race.

    This graphic highlights the comprehensive ASUS AI server solution in a data-center scenario. The foreground showcases services such as consulting, infrastructure architecture design, software platforms, server installation and validation, and after-sales support.
  • AI servers: Why choose ASUS?

    ASUS excels in its holistic approach, harmonizing cutting-edge hardware and software, empowering customers to accelerate their research and innovation. By bridging technological excellence with practical solutions, ASUS pioneers advancements that redefine possibilities in AI-driven industries and everyday experiences.

    • Product offering
      ASUS: Designs complete AI server solutions from software to hardware, covering Intel®, NVIDIA® and AMD® solutions, and x86 or Arm® architectures.
      Competitors: Restricted to server hardware only.
    • Design capability
      ASUS: Has resources on tap to respond quickly to almost any requirement, with top-tier components, strong ecosystem partnerships, feature-rich designs and superior in-house expertise.
      Competitors: Limited support for bespoke solutions, particularly with regard to software.
    • Software ecosystem
      ASUS: Engages in the expansive and intricate field of AI, leveraging internal software expertise while forging partnerships with software and cloud providers to offer holistic solutions.
      Competitors: Lack self-owned software-development resources.
    • Complete after-sales service
      ASUS: Prioritizes customer satisfaction with consultations and customized support, software services such as remote management via ASUS Control Center, and an intuitive cloud-service user portal.
      Competitors: Provide only product-sales services.
  • The full lineup of ASUS AI solutions
    From bare metal to integrated software-centric approaches
    This graphic showcases the full ASUS AI server lineup, supporting every level of AI development – edge AI, generative AI and AI supercomputing – with single-server and full-rack options.
  • Choose the best AI server for your needs and budget

    Do you see AI as a potential solution for your current challenges? Looking to leverage AI to develop enterprise AI capabilities, but worried about the costs and maintenance? ASUS provides comprehensive AI infrastructure solutions for diverse workloads and all your specific needs. These advanced AI servers and software solutions are purpose-built for handling intricate tasks like deep learning, machine learning, predictive AI, generative AI (Gen AI), large language models (LLMs), AI training and inference, and AI supercomputing. Choose ASUS to efficiently process vast datasets and execute complex computations.

    • Edge AI

      As manufacturing advances into the Industry 4.0 era, sophisticated control systems must incorporate edge AI to enhance processes holistically. ASUS edge AI server solutions provide real-time processing at the device level, improving efficiency, reducing latency and enhancing security for IoT applications.
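The latency advantage of edge deployment can be sketched with simple arithmetic: on-device inference avoids the network round trip entirely. All figures below are assumed example values, not ASUS benchmarks.

```python
# Illustrative latency budget: why on-device (edge) inference can beat a
# round trip to the cloud. All numbers are assumed example values,
# not ASUS benchmarks.

def total_latency_ms(inference_ms, network_rtt_ms=0.0):
    """End-to-end latency = model inference time + any network round trip."""
    return inference_ms + network_rtt_ms

# Cloud path: a fast datacenter GPU, but sensor data must cross the network.
cloud = total_latency_ms(inference_ms=5.0, network_rtt_ms=60.0)   # 65.0 ms

# Edge path: a slower local accelerator, but zero network hop.
edge = total_latency_ms(inference_ms=20.0)                        # 20.0 ms

print(f"cloud: {cloud} ms, edge: {edge} ms")
```

Even with slower local compute, the edge path responds sooner because the network term dominates the budget.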

      EG500-E11-RS4-R

      Faster storage, graphics and networking capabilities

      • Powered by 5th Gen Intel® Xeon® Scalable processors to unleash up to 21%-greater general-purpose performance per watt, significantly improving AI inference and training
      • Short-depth chassis and rear access design for space-limited edge environments
      • Scalable expansion with support for three FHHL cards for GPUs, two internal SATA with short PSU in the rear**, two external SATA/NVMe in the front, and optional two E1.S SSDs
      • Redundant AC PSU for datacenter or server room environments

      ** Two internal SATA bays only for 650W/short PSU mode

      Image: ASUS EG500-E11-RS4-R server

      EG520-E11-RS6-R

      Elevated efficiency, optimized GPU and network capabilities

      • Powered by 5th Gen Intel® Xeon® Scalable processors to unleash up to 21%-greater general-purpose performance per watt, significantly improving AI inference and training
      • Short-depth chassis and rear access design for space-limited edge environments
      • Scalable expansion, with support for one FHHL card, two FHFL cards for GPU, two SATA/NVMe (front) and four SATA (rear).
      • Designed for reliable operation in 0–55°C** environments
      • Redundant AC PSU for data centers or server rooms

      ** Non-GPU SKUs are designed for 0–55°C environments; and 0–35°C for GPU-equipped SKUs.

      Image: ASUS EG520-E11-RS6-R server
    • AI Inference

      Meticulously trained machine-learning models face the ultimate challenge of interpreting new, unseen data, a task that involves managing large-scale data and overcoming hardware limitations. ASUS AI servers, with their powerful data-transfer capabilities, efficiently run live data through trained models to make accurate predictions.
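The inference step described above, frozen weights scoring live data, can be sketched in a few lines. The toy logistic classifier and its weights are assumptions for illustration, not a real trained network or ASUS software.

```python
# Minimal sketch of inference: live data flows through a model whose weights
# are frozen (already trained). The toy logistic classifier and its weights
# below are assumptions for illustration, not a real trained network.
import math

TRAINED_WEIGHTS = [0.8, -0.5]   # pretend these came out of a training run
TRAINED_BIAS = 0.1

def predict(features):
    """Score one live sample; no learning happens at inference time."""
    z = sum(w * x for w, x in zip(TRAINED_WEIGHTS, features)) + TRAINED_BIAS
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid -> probability of class 1

live_batch = [[1.0, 0.2], [0.1, 2.0]]         # new, unseen samples
probs = [predict(x) for x in live_batch]
labels = [int(p >= 0.5) for p in probs]
print(labels)   # [1, 0]
```

Production inference servers do exactly this loop at scale, which is why fast I/O and memory bandwidth matter as much as raw compute.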

      RS720-E12

      GPU-accelerator optimization for maximum efficiency

      • Powered by dual Intel Xeon 6 processors, supporting DDR5 RDIMM up to 6400 MHz and MRDIMM up to 8000 MHz, the RS720-E12 series delivers exceptional efficiency for diverse workloads.
      • Designed to support up to three dual-slot GPUs, making it ideal for AI workloads and high-performance computing (HPC) tasks.
      • Equipped with up to 10 PCIe® 5.0 slots for extensive scalability and future-proofing.
      Image: ASUS RS720-E12 server

      RS720A-E13

      Performance, efficiency and manageability with multi-tasking

      • Powered by AMD EPYC 9005 processors and supporting a maximum TDP of up to 500 watts per socket
      • Supports 24 DDR5 RDIMM up to 6000 MHz (1DPC), delivering exceptional performance and efficiency across the widest range of workloads
      • Supports up to 24 all-flash PCIe 5.0 NVMe drives and ten expansion PCIe 5.0 slots (two OCP 3.0 and eight PCIe) for higher bandwidth and system upgrades.
      • Optimized to support up to three dual-slot GPUs, such as the NVIDIA H100 NVL, for demanding graphical and computational tasks
      Image: ASUS RS720A-E13 server
    • AI Fine Tuning

      Many engineers and developers strive to enhance performance and customization by fine-tuning large language models (LLMs). However, they often encounter challenges like deployment failures. To overcome these issues, robust AI server solutions are essential for ensuring seamless and efficient model deployment and operation.
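Fine-tuning in miniature: start from pretrained weights and take a few gradient steps on a small task dataset, rather than training from scratch. The one-parameter model, loss and data below are illustrative assumptions, not an LLM workflow.

```python
# Fine-tuning in miniature: start from a pretrained weight and take a few
# gradient steps on a small task dataset, instead of training from scratch.
# The one-parameter model, loss and data are illustrative assumptions.

def fine_tune(w, data, lr=0.1, steps=50):
    """Gradient descent on mean squared error of y ~ w*x, starting from w."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

pretrained_w = 0.9                      # as if learned on a large generic corpus
task_data = [(1.0, 2.0), (2.0, 4.1)]    # tiny task-specific dataset, y ~ 2x
w = fine_tune(pretrained_w, task_data)
print(round(w, 2))   # 2.04 -- the least-squares fit to the task data
```

Real LLM fine-tuning applies the same idea to billions of parameters, which is where server-class memory capacity and GPU interconnects become decisive.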

      ESC NM1-E1

      2U high-performance server powered by NVIDIA Grace-Hopper Superchip with NVIDIA NVLink-C2C technology

      • NVIDIA® GH200 Grace Hopper Superchip with a 72-core Grace CPU and NVIDIA NVLink-C2C technology at 900 GB/s bandwidth
      • Accelerates time-to-market applications
      • Optimized mechanism efficiently removes excess heat
      • Includes NVIDIA AI Enterprise with its extensive frameworks and tools
      • Eases maintenance for maximum efficiency and minimized downtime
      • Ready for diverse applications in AI-driven data centers, HPC, data analytics and Omniverse™.
      Image: ASUS ESC NM1-E1 server

      ESC NM2-E1

      2U NVIDIA MGX GB200 NVL2 server designed for generative AI and HPC

      • Modular Arm®-based MGX server, aimed at enterprise customers
      • Modular architecture, defined by NVIDIA, shortens development and accelerates product-launch times
      • Dual NVIDIA® Grace Blackwell GB200 NVL2 superchips (two CPUs and two GPUs)
      • CPUs and GPUs can be connected through NVLink and NVLink-C2C interconnects for improved AI-computing performance
      Image: ASUS ESC NM2-E1 server
    • AI Training

      Whether you're involved in AI research, data analysis, or deploying AI applications, ASUS AI servers, recognized for their outstanding performance and scalability to process complex neural network training, significantly accelerate training processes – fully unleashing the capabilities of AI applications.

      ESC8000-E11

      Turbocharge generative AI and LLM workloads

      • Powered by 5th Gen Intel® Xeon® Scalable processors to unleash up to 21%-greater general-purpose performance per watt, significantly improving AI inference and training
      • Up to eight dual-slot active or passive GPUs, NVIDIA® NVLink bridge, and NVIDIA BlueField DPU support to enable performance scaling
      • Independent CPU and GPU airflow tunnels for thermal optimization and support for up to four 3000 W Titanium redundant power supplies for uninterrupted operation
      • A total of eight bays in a combination of Tri-Mode NVMe/SATA/SAS drives on the front panel, plus 11 PCIe 5.0 slots for higher bandwidth and system upgrades
      • Optional OCP 3.0 module with PCIe 5.0 slot in rear panel for faster connectivity
      Image: ASUS ESC8000-E11 server

      ESC8000A-E13P

      Turbocharge generative AI and LLM workloads

      • Powered by AMD EPYC 9005 processors with up to 192 Zen 5c cores, 12-channel DDR5 up to 6000 MHz, and support for a maximum TDP of up to 500 watts per socket
      • Fully compliant with the NVIDIA MGX architecture, enabling rapid deployment at scale
      • High-density 4U server for eight dual-slot high-end GPUs, supporting up to 600 watts each
      • Optimized server configuration with five PCIe 5.0 slots for high-bandwidth PCIe NICs and DPUs to enable performance scaling
      • Support for flexible GPU configurations, including active GPUs with built-in fans and passive GPUs reliant on system fans, accommodates a variety of applications and design mechanisms.
      Image: ASUS ESC8000A-E13P server
    • Generative AI

      ASUS AI servers with eight GPUs excel in intensive AI model training, managing large datasets, and complex computations. These servers are specifically designed for AI, machine learning, and high-performance computing (HPC), ensuring top-notch performance and reliability.
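Why eight GPUs help: data-parallel training splits each batch across devices, computes gradients locally, then averages them (an all-reduce) before every weight update. This pure-Python sketch simulates the pattern; on real hardware the shards run in parallel and the all-reduce travels over NVLink or the network.

```python
# Conceptual sketch of data-parallel training, the pattern an eight-GPU
# server accelerates: each device computes gradients on its own shard of
# the batch, the gradients are averaged (an all-reduce), and every device
# applies the same update. Devices are simulated here in plain Python.

def local_gradient(w, shard):
    """Gradient of mean((w*x - y)^2) over one device's data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """Average gradients across devices (what NVLink-class links speed up)."""
    return sum(grads) / len(grads)

w = 0.0
shards = [                        # one shard per simulated device
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0)],
]
for _ in range(100):
    grads = [local_gradient(w, s) for s in shards]  # parallel on real GPUs
    w -= 0.01 * all_reduce_mean(grads)
print(round(w, 2))   # 3.0 -- the slope of the underlying y = 3x data
```

Because every device must exchange gradients each step, GPU-to-GPU bandwidth, not just GPU count, governs how well training scales.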

      ESC I8-E11

      Dedicated deep-learning training and inference

      • Powered by the latest 5th Gen Intel® Xeon® Scalable processors
      • Accommodates eight Intel® Gaudi® 3 AI OCP Accelerator Module (OAM) mezzanine cards
      • Integrates 24 x 200GbE RDMA NICs with industry-standard RoCE on every Gaudi® 3
      • Modular design with reduced cable usage shortens assembly time and improves thermal optimization
      • High power efficiency with redundant 3000 W 80 PLUS® PSU
      Image: ASUS ESC I8-E11 server

      ESC A8A-E12U

      Empowering AI and HPC with excellent performance

      • 7U eight-GPU server with dual AMD EPYC 9005 processors, designed for generative AI and HPC
      • Industry-leading 256 GB HBM capacity
      • Supports AMD Instinct™ MI325X accelerators up to 400 watts per socket and includes a direct GPU-to-GPU interconnect to deliver up to 6 TB/s bandwidth for efficient scaling for large AI models and HPC workloads
      • A dedicated one-GPU-to-one-NIC topology supports up to eight NICs for the highest throughput during compute-intensive workloads
      • Modular design with reduced cable usage shortens assembly time and improves thermal optimization
      Image: ASUS ESC A8A-E12U server

      ESC N8-E11/ESC N8-E11V

      The best choice for heavy AI workloads

      • Powered by NVIDIA® HGX™ H100/ H200 with 5th Gen Intel® Xeon® Scalable Processors
      • Direct GPU-to-GPU interconnect via NVLink delivers 900 GB/s bandwidth for efficient scaling
      • A dedicated one-GPU-to-one-NIC topology supports up to eight NICs for the highest throughput during compute-intensive workloads
      • Modular design with reduced cable usage shortens assembly time and improves thermal optimization
      • Advanced NVIDIA® technologies deliver full power of NVIDIA® GPUs, BlueField-3, NVLink, NVSwitch and networking
      Image: ASUS ESC N8-E11/ESC N8-E11V server
    • AI Supercomputing

      AI supercomputers are built from finely-tuned hardware with countless processors, specialized networks and vast storage. ASUS offers turnkey solutions, expertly handling all aspects of supercomputer construction, from data center setup and cabinet installation to thorough testing and onboarding. Their rigorous testing guarantees top-notch performance.

      ASUS AI POD
      NVIDIA GB200 NVL72

      Unimaginable AI. Unleashed.

      • The first Arm®-based rack-scale product from ASUS, featuring the most powerful NVIDIA® GB200 superchip and 5th Generation NVLink technology.
      • Connects 36 Grace CPUs and 72 Blackwell GPUs within a single rack to deliver up to 30X faster real-time LLM inference.
      • Scale-up ecosystem-ready
      Image: ASUS AI POD with NVIDIA GB200 NVL72

    *ASUS servers and services are available globally. For further information, please reach out to your local ASUS representative.

  • Eight-week accelerated AI software solutions from ASUS

    Invested in an AI server but don't know how to streamline its management and optimize performance with user-friendly AI software? ASUS and TWSC integrate advanced AI software tools and AI Foundry Services that facilitate the development, deployment, and management of AI applications. In as little as eight weeks*, the ASUS team can complete standard data-center software solutions, including cluster deployment, billing systems, generative AI tools, as well as the latest OS verification, security updates, service patches and more.

    Furthermore, ASUS provides crucial verification and acceptance services, including all aspects of the software stack, to ensure servers operate flawlessly in real-world environments. This stage validates compliance with all specified requirements and ensures seamless communication within each client's specific IT setup. The process begins with rigorous checks on power, network, GPU cards, voltage and temperature to ensure smooth startup and functionality. Thorough testing identifies and resolves issues before handover, guaranteeing that data centers operate reliably under full load.
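The pre-handover checks described above can be sketched as threshold validation over sensor readings. The sensor names, thresholds and values below are hypothetical assumptions for illustration, not ASUS tooling.

```python
# Hedged sketch of pre-handover node validation: check power, network,
# GPU count, voltage and temperature readings against thresholds.
# Sensor names, thresholds and readings are illustrative assumptions,
# not actual ASUS tooling.

THRESHOLDS = {
    "psu_ok":        lambda v: v is True,          # both power supplies healthy
    "network_gbps":  lambda v: v >= 100,           # fabric link up at speed
    "gpus_detected": lambda v: v == 8,             # e.g. an eight-GPU training node
    "voltage_12v":   lambda v: 11.4 <= v <= 12.6,  # 12 V rail within +/-5%
    "gpu_temp_c":    lambda v: v < 85,             # temperature cap under load
}

def validate_node(readings):
    """Return names of failed checks; an empty list means ready for handover."""
    # Missing sensors read as NaN, which fails every comparison -> flagged.
    return [name for name, check in THRESHOLDS.items()
            if not check(readings.get(name, float("nan")))]

readings = {"psu_ok": True, "network_gbps": 200,
            "gpus_detected": 8, "voltage_12v": 12.1, "gpu_temp_c": 61}
print(validate_node(readings))   # [] -> all checks passed
```

Running such checks across every node before handover is what catches misconfigurations while they are still cheap to fix.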

    The meticulous approach followed by ASUS AI infrastructure solutions provides scalability that can adapt seamlessly to your growing AI needs, ensuring flexibility and future-proofing your infrastructure.

    This line graph illustrates the eight-week ASUS AI software services timeline.

    *Please note that the delivery time for software solutions may vary based on customized requirements and project scope of work.

    An AI server, customized for you

    Choose an ASUS AI server solution to enjoy significant time savings, reducing deployment time by up to 50% compared to manual setups – and allowing for seamless integration and enhanced performance. The expertise of specialist ASUS teams minimizes errors and downtime by preventing costly misconfigurations, ensuring that your servers run smoothly and reliably. With an intuitive interface and powerful automation features, ASUS simplifies server management, making it effortless to handle complex tasks.

    Achieve excellence with complete software offerings from ASUS

    Outshining other competitors, ASUS offers advanced computing and AI software services that include high-performance computing services, GPU virtualization management, integration with external systems for managing models, deployment of private cloud services, generative AI training, and integrated software and hardware solutions for data centers.

    • Key services*
      • Cloud service provider
      • Resource pool management technology
        GPU host and VM
        GPU container
        GPU serverless
        Relocating resources between pools
      • Operation platform
        Management of tenancy, containers, scheduling and storage
        Deep learning and generative AI model management
        Usage-based billing management
        User-scenario stability monitoring
      • Job scheduling and orchestration
        Kubernetes/Docker
        HPC (Slurm)
      • Integrated development tools
        TensorFlow, PyTorch, Keras, NVIDIA official tools and more
      • Scalability in resource management
        Supports multi-cloud computing
        Supports multiple architectures (Arm, ASIC)
    • Competitors: N/A across all of the above service categories

    *The above service availability may vary by country or region.
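The "relocating resources between pools" capability listed above can be illustrated with a toy scheduler that moves idle GPUs from surplus pools to pools with unmet demand. The pool names and GPU counts are assumptions for illustration, not TWSC's actual scheduler.

```python
# Toy illustration of "relocating resources between pools": a scheduler moves
# idle GPUs from surplus pools (VM, serverless) to a pool with queued demand
# (containers). Pool names and GPU counts are assumptions for illustration.

def rebalance(pools, demand):
    """Move spare GPUs from surplus pools into pools whose demand exceeds
    supply. Assumes every demanded pool already exists in `pools`."""
    pools = dict(pools)                        # work on a copy
    for needy, want in demand.items():
        deficit = want - pools.get(needy, 0)
        for donor in pools:
            if deficit <= 0:
                break
            if donor == needy:
                continue
            spare = pools[donor] - demand.get(donor, 0)
            move = min(max(spare, 0), deficit)
            pools[donor] -= move
            pools[needy] += move
            deficit -= move
    return pools

pools = {"vm": 8, "container": 4, "serverless": 4}
demand = {"vm": 2, "container": 10, "serverless": 2}
print(rebalance(pools, demand))   # {'vm': 2, 'container': 10, 'serverless': 4}
```

Keeping capacity fluid between host, container and serverless pools is what lets a shared GPU cluster stay fully utilized as workloads shift.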

  • ASUS AI infrastructure solutions to transform every aspect of your business

    The future of artificial intelligence is swiftly reshaping business applications and consumer lifestyles. ASUS expertise lies in striking the perfect balance between hardware and software, empowering customers to expedite their research and innovation endeavors.

  • Get your questions answered!
    • What is an AI server?
      Imagine a computer system specifically designed to power the future of artificial intelligence. That's what an AI server is! But these aren't your average office servers; they're the muscle behind the magic, crunching through the incredibly complex calculations that fuel AI advancements.
      What is an AI Server? | ASUS Pressroom
    • Why is an AI server essential?
      The rise of generative AI, and tools like ChatGPT, has spurred a substantial increase in AI adoption, prompting businesses to reconsider the necessity of AI servers. However, AI servers remain indispensable, designed specifically to manage the intensive computational needs of AI and machine learning models. Outshining competitors, ASUS delivers not only cutting-edge AI server systems but also the customized software solutions that are essential for facilitating smooth AI-development processes.
    • AI Server vs. General Server
      • Purpose
        AI server: Designed specifically for handling complex AI algorithms in machine learning and deep learning workloads.
        General server: Designed for general data processing and storage tasks, such as web hosting, application servers and file storage.
      • Processor
        AI server: Equipped with high-performance GPUs or TPUs optimized for AI computing tasks.
        General server: Standard CPUs suitable for a variety of general computing tasks.
      • Memory
        AI server: Utilizes large amounts of high-speed memory to process large-scale datasets quickly.
        General server: Memory configurations vary widely based on specific needs.
      • Data transfer
        AI server: High-speed I/O capabilities support fast data read/write and streaming, along with high-speed network connections.
        General server: Standard I/O capabilities suitable for general networking and storage needs.
      • Cost and power consumption
        AI server: Higher cost due to the high-performance design; higher-performance hardware leads to higher power consumption.
        General server: Relatively lower power consumption and cost, targeting the general commercial market and applications.
      • Maintenance and scalability requirements
        AI server: May require more specialized maintenance for scalability and software optimization to maintain peak performance.
        General server: Generally lower maintenance requirements, with abundant support and service options available in the market.
    • How to ensure delivery of complete software services in just eight weeks?
      ASUS, in collaboration with its subsidiary Taiwan Web Service Corporation (TWSC), specializes in designing customized software solutions for clients. Based on past projects, an eight-week timeline is achievable for project management – though delivery times will naturally vary depending on requirements and project scope.

      TWSC boasts extensive experience in deploying and co-maintaining large-scale AI HPC infrastructure as an NVIDIA Partner Network cloud partner (NCP), with the National Center for High-performance Computing (NCHC)’s TAIWANIA-2 (#10 / Green 500, November 2018) and FORERUNNER 1 (#92 / Green 500, November 2023) supercomputer series. Additionally, TWSC’s AFS POD solutions enable quick deployment of AI supercomputing and flexible model optimization for AI 2.0 applications, enabling users to tailor AI demand specifically to their needs.

      TWSC’s AFS POD solutions offer enterprise-grade AI infrastructure with swift rollouts and comprehensive end-to-end services, ensuring high availability and cybersecurity standards. Our solutions empower success stories across academic, research and medical institutions. Comprehensive cost-management capabilities optimize power consumption and streamline operating expenditure (OpEx), making TWSC technologies a compelling choice for organizations seeking a reliable and sustainable generative AI platform.

      About TWSC
      Taiwan Web Service Corporation (TWSC) is a subsidiary of ASUS focusing on AI and cloud-computing solutions, offering AI-powered high-performance computing (HPC) capabilities and comprehensive cloud-platform tools and solutions – utilizing AI supercomputers to quickly execute large-scale applications. Leveraging its world-class expertise, TWSC provides professional enterprise-grade generative AI platforms and services, integrating trusted open-source large language models (LLMs) and the exclusive Formosa Foundation Model (FFM) enhanced for traditional Chinese. With easy, no-code operation, TWSC technologies support local enterprise applications and continuously promote the democratization and commercialization of AI. From the cloud to the ground, our offerings effectively help to reduce costs, time, human-resource demands and cybersecurity concerns for enterprises, providing the most comprehensive and optimal solutions.

      • TWSC has been honored with the 2023 EE Awards Asia Featured Cloud Supplier Corporate Award
      • Learn more: https://tws.twcc.ai/en/
    • Ready to build a server for AI workloads?
      Customize your solutions by leaving us a message. Whether you need a quote, samples, design assistance or have other questions, we're here to help! https://servers.asus.com/support/contact