Co-Design Architecture to Deliver Next Generation of Performance Boost

 
The latest revolution in HPC is the move to Co-Design architecture, a collaborative effort to reach Exascale performance by taking a holistic, system-level approach to fundamental performance improvements. This collaboration turns every active system device into an accelerator: by orchestrating a more effective mapping of communication between the devices and the software in the system, it produces a well-balanced architecture across the compute elements, the networking, and the data storage infrastructure.

Co-Design architecture improves system efficiency and optimizes performance by ensuring that all components serve as co-processors in the data center, creating synergies between the hardware and the software, and between the different hardware elements within the data center. This stands in diametric opposition to the traditional CPU-centric approach, which seeks to improve performance by onloading ever more operations onto the CPU.

Rather, Co-Design recognizes that the CPU has reached the limits of its scalability and offers an intelligent network as the ideal co-processor to share the responsibility for handling and accelerating workloads. Because the CPU alone can no longer deliver additional performance, the rest of the system must be utilized more effectively to enable further gains.

Moreover, the CPU was designed to compute, not to oversee data transfer. Offloading non-compute functions frees the CPU to focus on its original purpose, and placing the algorithms that handle those functions on an intelligent network improves performance both on the network and in the CPU itself.

This technology transition from a CPU-centric architecture to Co-Design brings smart elements throughout the data center, with every active component becoming more intelligent. Data is processed wherever it resides, essentially providing in-network computing, rather than queuing up behind the processing bottleneck at the CPU.

The only solution is to enable the network to become a co-processor. Smart devices can move data directly from CPU or GPU memory into the network and back, and can analyze it along the way. The new model is therefore completely distributed in-network computing: data is processed wherever it is located, whether at the node level, the switch level, or the storage level.
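
As a rough illustration of this kind of direct data movement, the sketch below uses MPI one-sided communication (RMA), which MPI libraries commonly map onto the RDMA capabilities of the interconnect so that data is written straight into a remote process's memory without involving the remote CPU. The program structure, buffer size, and build commands are illustrative assumptions, not details from this article.

/*
 * Minimal sketch: one-sided MPI communication (RMA). Rank 0 writes its
 * buffer directly into rank 1's exposed memory window; on RDMA-capable
 * hardware the NIC can perform this write without involving the target CPU.
 * Assumed build/run: mpicc rma_put.c -o rma_put && mpirun -np 2 ./rma_put
 */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Rank 0 holds the source data; everyone else starts with zeros. */
    double buf[N];
    for (int i = 0; i < N; i++)
        buf[i] = (rank == 0) ? (double)i : 0.0;

    /* Each process exposes its buffer as a window for remote access. */
    MPI_Win win;
    MPI_Win_create(buf, N * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0 && size > 1) {
        /* Write directly into rank 1's window memory. */
        MPI_Put(buf, N, MPI_DOUBLE, 1, 0, N, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);

    if (rank == 1)
        printf("rank 1: buf[10] = %f (written remotely by rank 0)\n", buf[10]);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}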

The first algorithms being migrated to the network are data aggregation protocols, which enable sharing and collecting information across parallel processes and distributing the results. By offloading these algorithms from the CPU to the intelligent network, a data center can see a performance improvement of at least 10X, resulting in a dramatic acceleration of various HPC applications and data analytics.
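
Data aggregation of this kind is what MPI applications express as collective operations such as MPI_Allreduce. The minimal sketch below shows one such aggregation step; whether the reduction runs on the host CPUs or is offloaded to aggregation-capable switches is a property of the MPI library and the fabric, and is transparent to this code. The values and process count are illustrative.

/*
 * Minimal sketch of a data aggregation step expressed as an MPI collective.
 * The application code is identical whether the sum is computed on the host
 * CPUs or offloaded to the network by an offload-enabled MPI library.
 * Assumed build/run: mpicc allreduce.c -o allreduce && mpirun -np 4 ./allreduce
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process contributes a partial result... */
    double local = (double)(rank + 1);
    double global = 0.0;

    /* ...and the collective aggregates them across all processes. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %f\n", size, global);

    MPI_Finalize();
    return 0;
}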

In the future, we anticipate that most data algorithms and communication frameworks (such as MPI) will be managed and executed by the data center interconnect, enabling analytics to be performed on the data as it moves.
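
One way to get a feel for processing data while it is in flight, at the application level, is a nonblocking collective: the reduction can progress in the interconnect while the CPU continues with independent work. The sketch below uses MPI_Iallreduce; the local_work() helper is a hypothetical stand-in for unrelated computation, not something described in this article.

/*
 * Sketch: overlap computation with an in-flight reduction using a
 * nonblocking collective (MPI-3 MPI_Iallreduce).
 */
#include <mpi.h>
#include <stdio.h>

/* Placeholder for computation that does not depend on the reduction. */
static double local_work(void)
{
    double acc = 0.0;
    for (int i = 0; i < 1000000; i++)
        acc += 1.0 / (i + 1.0);
    return acc;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank, global = 0.0;
    MPI_Request req;

    /* Start the reduction, then keep the CPU busy while it completes. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);
    double side = local_work();
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("reduction = %f, overlapped work = %f\n", global, side);

    MPI_Finalize();
    return 0;
}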

Ultimately, the goal of any data center is to achieve the highest possible performance with the utmost efficiency, thereby providing the best return on investment. For many years, the best way to do this was to maximize CPU frequency and increase the number of cores. However, the CPU-centric approach can no longer scale to meet the massive needs of today’s data centers, and performance gains must be found elsewhere. The Co-Design approach addresses this by offloading non-compute functions from the CPU onto an intelligent interconnect that not only transports data efficiently from one endpoint to another, but also handles in-network computing, analyzing and processing data while it is en route.

Sound interesting? Learn more at our upcoming webinar, Smart Interconnect: The Next Key Driver of HPC Performance Gains.

About Gilad Shainer

Gilad Shainer has served as Mellanox's Vice President of Marketing since March 2013. Previously, he was Mellanox's Vice President of Marketing Development from March 2012 to March 2013. Gilad joined Mellanox in 2001 as a design engineer and later served in senior marketing management roles between July 2005 and February 2012. He holds several patents in the field of high-speed networking and contributed to the PCI-SIG PCI-X and PCIe specifications. Gilad holds an MSc degree (2001, Cum Laude) and a BSc degree (1998, Cum Laude) in Electrical Engineering from the Technion – Israel Institute of Technology.
