ASUS Servers Democratize AI Development with Yotta in India

2024/04/24

Yotta selects ASUS’ ESC N8-E11 for faster deployment of AI models

KEY POINTS

  • Faster AI deployment: The ESC N8-E11 is equipped with eight NVIDIA H100 GPUs on the NVIDIA HGX H100 platform for HPC and AI development.
  • Premium AI power: Fueled by dual 5th Gen Intel® Xeon® Scalable processors, supporting up to 350W TDP for diverse AI workloads.
  • Dedicated one-GPU-to-one-NIC topology: Supports up to eight NICs and eight GPUs, delivering the highest throughput for compute-intensive workloads.

MUMBAI, India, April 24, 2024 — ASUS today announced that Yotta, an end-to-end digital transformation company, has selected the ESC N8-E11, an advanced NVIDIA® HGX H100 eight-GPU AI server, for its Shakti Cloud platform.

Shakti Cloud is Yotta’s latest AI-HPC supercomputing cloud platform, designed to accelerate the development and deployment of AI models for businesses while reducing cost.

“ASUS’ expertise in AI server solutions perfectly aligns with Yotta's capabilities in data center infrastructure. This collaboration will help foster AI development in India,” said Vinay Shetty, Regional Director, Component Business, ASUS India & South Asia. “Shakti Cloud is an excellent solution for businesses and startups that want to build AI solutions by harnessing the power of our ESC N8-E11 servers with NVIDIA H100 Tensor Core GPUs.”

“With Shakti Cloud, Yotta wants to revolutionize AI development in India and also offer global enterprises access to top-of-the-line infrastructure for all their AI needs,” said Sunil Gupta, Co-Founder, MD & CEO of Yotta. “In order to do that, we needed the world’s best GPU servers – the likes of which India hadn’t seen. We’re glad that ASUS was able to meet our requirements with the ESC N8-E11 HGX H100 eight-GPU server. That, combined with our Tier IV-certified, world-class data center infrastructure, gives us the right platform to further our AI vision.”



Domain experts

ASUS offers optimized server design and rack integration to meet the needs of AI/HPC workloads, and delivers a no-code AI platform with a complete in-house AI software stack that helps any business accelerate AI development across LLM pre-training, fine-tuning and inference with lower risk and a faster ramp-up, minimizing or even eliminating the need to start from scratch. In addition, ASUS has experience operating a supercomputer in Taiwan, with both operations and business support systems (OSS and BSS), and works with customers to realize data-center infrastructure strategies that optimize operating expense (OpEx).


Advanced, powerful AI server

The high-end ASUS ESC N8-E11 is an NVIDIA® HGX H100 AI server incorporating eight NVIDIA H100 Tensor Core GPUs and engineered to reduce training time for large-scale AI models and HPC workloads. This 7U dual-socket server, powered by 5th Gen Intel Xeon® Scalable processors, is specifically designed with a dedicated one-GPU-to-one-NIC topology that supports up to eight NICs, delivering the highest throughput for compute-intensive workloads. The modular design greatly reduces cabling, shortening system assembly, simplifying cable routing and lowering the risk of airflow obstruction to ensure thermal optimization.

The ESC N8-E11 incorporates fourth-generation NVLink and NVSwitch technology, as well as NVIDIA ConnectX-7 SmartNICs enabling GPUDirect® RDMA and GPUDirect Storage with NVIDIA Magnum IO™ and NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform, to accelerate the development of AI and data science. It is designed with separate GPU and CPU sleds for thermal efficiency, scalability and unprecedented performance, and is ready for direct-to-chip (D2C) liquid cooling, significantly reducing a data center’s overall power-usage effectiveness (PUE).