Build the Most Powerful Data Center with GPU Computing Technology and High-speed Interconnect
Discuss How to Design a Well-balanced System That Maximizes Performance and Scalability
Webinar Date: June 2020
As Moore’s Law comes to an end, the convergence of HPC and AI is bringing a new set of methods that complement conventional modeling, simulation, and data analytics, increasing the potential to solve science’s greatest challenges. Infrastructure that once required years of planning and the integration of custom-designed components is giving way to a new era in which GPU-accelerated technology and in-network computing enable enterprises to deploy world-record-setting supercomputers built from standardized components, now systemized for deployment in months or even weeks.
NVIDIA’s Tensor Core GPUs, which sit at the core of most AI, ML, and HPC applications, together with the low-latency, high-bandwidth InfiniBand interconnect designed to minimize bottlenecks, provide the foundation for designing and building such large-scale infrastructure.
In this webinar you will learn:
- The challenges of building a modern HPC and AI data center, and how to tackle them
- The advantages and performance gains that GPU computing brings, and how to design a heterogeneous infrastructure
- How in-network computing provides a competitive advantage in scale-out designs for HPC and AI
Speakers:
Qingchun Song
Sr. Director of APAC Market Development
NVIDIA Mellanox Network Business Unit
Dr. Gabriel Noaje
Senior Solutions Architect of SEA/ANZ
NVIDIA Corporation
Ashrut Ambastha
Principal Engineer and Solution Architect
NVIDIA Mellanox Network Business Unit
On-demand webinar video coming soon