
Enabling Application Performance in Data Center Environments

Ethernet switches are simple: they move packets from port to port based on the attributes of each packet. There are plenty of switch vendors to choose from, and every vendor aspires to differentiate itself in this saturated market.

 

Mellanox Technologies switches are unique in this market: not just “yet another switch,” but a family of 1RU switches built around Mellanox’s own switching ASIC. These switches outperform any other switch offered in the market. As the first and (still) only vendor with a complete end-to-end 40GbE solution, Mellanox provides a complete interconnect solution and the best price/performance available.


Mellanox Named HP’s 2012 Innovation Supplier of the Year

We’re thrilled to start out 2013 with some great news: Mellanox was named HP’s 2012 Innovation Supplier of the Year at last month’s annual HP Supplier Summit in San Francisco.

Mellanox was chosen as the top supplier out of 600 worldwide contractors across all HP product lines. To determine the winner of the Innovation Supplier of the Year Award, HP evaluated an elite group of suppliers whose outstanding performance exemplified the principles of delivering greater value, including enhanced revenue, cost savings and process efficiencies.

Earlier this year, Mellanox announced that its Ethernet and InfiniBand interconnect solutions are now available through HP to deliver leading application performance for the HP ProLiant Generation 8 (Gen8) servers. Specific products available include: Mellanox ConnectX®-3 PCIe 3.0 FDR 56Gb/s InfiniBand adapters and 10/40GbE NICs, and SwitchX® FDR 56Gb/s InfiniBand switch blades and systems. Mellanox offers the only interconnect option for the HP ProLiant Gen8 servers that includes PCIe 3.0-compliant adapters.

We look forward to the continued partnership with HP in 2013. And stay tuned to our blog to learn more about new and innovative partnerships between Mellanox and its customers throughout the year.

Mellanox InfiniBand and Ethernet RDMA Interconnect Solutions Accelerate IBM’s Virtualized Database Solutions

Recently, IBM expanded its PureSystems family with the new PureData System, which builds analytics and the ability to handle big data into a single box. For today’s organizations to be competitive, they need to quickly and easily analyze and explore big data, even when dealing with petabytes. The new system simplifies and optimizes the performance of data warehouse services and analytics applications. The new PureData System for Analytics is designed to accelerate analytics and boasts the largest library of in-database analytic functions on the market today. Clients can use it to predict and help avoid customer churn in seconds, create targeted advertising and promotions using predictive and spatial analysis, and prevent fraud.

We are pleased to announce that our InfiniBand and Ethernet RoCE interconnect solutions have been selected to accelerate these systems, helping reduce CPU overhead, enable higher system efficiency and availability, and deliver higher return-on-investment.

Modern database applications are placing increased demands on the server and storage interconnects, requiring higher performance, scalability and availability. Virtualizing IBM DB2 pureScale® on System x® servers using Mellanox’s RDMA-based interconnect solutions delivers outstanding application performance and business benefits to IBM customers.

Mellanox’s interconnect solutions for the IBM DB2 pureScale virtualized database on System x servers provide the ability to run multiple highly scalable database clusters on the same shared infrastructure, while staying highly available and helping to minimize downtime.

Mellanox interconnect products enable IBM DB2 pureScale to deliver the performance and functionality needed to support the most demanding database and transaction-processing applications. Mellanox’s high-bandwidth, low-latency interconnects are one of the key ingredients in building scalable cluster solutions with DB2 pureScale.

Mellanox InfiniBand and Ethernet interconnects enable IBM DB2 pureScale to provide direct connectivity from the database virtual machines to the interconnect infrastructure while preserving RDMA semantics. This direct connectivity allows the virtual machines to achieve lower latency and faster data access than other solutions.

Live Demonstration and Presentations at Information On Demand 2012 (October 21st – October 26th in Las Vegas, NV)

Visit the Intel booth on the Expo floor to see a live demonstration of a virtualized DB2 pureScale cluster running over Mellanox’s 10GbE interconnect solution with RoCE.

Mellanox FDR 56Gb/s InfiniBand Adapters Provide Leading Application Performance for Dell PowerEdge C8000 Series Servers

Dell today announced the PowerEdge C8000 series, the industry’s only 4U shared infrastructure solution to provide customers with compute, GPU/coprocessor and storage options in a single chassis. End users deploying the PowerEdge C8000 with Mellanox fast interconnect solutions gain the industry-leading performance of 56Gb/s InfiniBand combined with the power of Dell’s newest high-end server, resulting in a high-performance solution with a low total cost of ownership through power efficiency, system scaling efficiency and compute density.

Mellanox FDR 56Gb/s InfiniBand solutions are already being deployed with Dell PowerEdge C8000 systems as part of the Stampede supercomputer at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. With a peak performance of more than 10 petaflops, Stampede will be the most powerful system available to researchers via the NSF’s eXtreme Science & Engineering Discovery Environment (XSEDE) program when installed in January 2013.

Mellanox fast interconnect solutions provide the Dell PowerEdge C8000 with low-latency, high-bandwidth benefits for the most resource-intensive hyperscale workloads, including HPC, big data processing and hosting. Mellanox delivers the most effective interconnect solution for the Dell PowerEdge C8000, enabling the highest compute and storage performance at the lowest cost and power consumption.

Lawrence Livermore National Laboratory High Performance Computing Innovation Center (HPCIC)

On Thursday, June 30th, 2011, Lawrence Livermore National Laboratory (LLNL) held a ribbon-cutting ceremony to inaugurate the opening of the High Performance Computing Innovation Center (HPCIC) in Livermore, CA. The innovation center helps foster and promote HPC development and product design through collaborations between LLNL and industry. The center offers both face-to-face interactions with LLNL staff and remote video-based discussions.

I had the pleasure of participating in the ceremony along with my colleagues Donald Fiegel and Gabriela Gonzalez-Do. Local Congressional Representatives John Garamendi and Jerry McNerney, as well as other local officials, also attended the event. During the ceremony, Mellanox was noted as an important partner in several collaborations with LLNL, including the opening of the new center.

Lawrence Livermore National Laboratory High Performance Computing Innovation Center (HPCIC)

In the collage you can see pictures from the ceremony (top left and right); Mark Seager (former LLNL, now Intel) and me (bottom right corner); and Donald Fiegel, Matt Leininger (on video conference) and me in the new facility (bottom left corner).

Regards,

Gilad Shainer

Mellanox Scalable HPC Solutions with NVIDIA GPUDirect Technology Enhance GPU-Based HPC Performance and Efficiency

Mellanox announced the immediate availability of NVIDIA GPUDirect™ technology with Mellanox ConnectX®-2 40Gb/s InfiniBand adapters, which boosts GPU-based cluster efficiency and increases performance by an order of magnitude over today’s fastest high-performance computing clusters. Read the entire press release here:


Oracle CEO Sees Expansion of InfiniBand

During Oracle’s recent earnings conference call, Oracle CEO Larry Ellison noted that the Oracle Sun Exadata, with Mellanox InfiniBand, continues to gain market adoption with its stunning database and transaction performance at over 10X that of its competitors. Ellison also spoke of Oracle’s intention to port additional middleware and applications to run over the InfiniBand network, and to extend it across the wide array of server and storage product lines gained through the Sun acquisition, further expanding the use of InfiniBand technology.

Mellanox’s technology, leveraged in Oracle-based server and storage systems, continues to expand in enterprise applications for Tier 1 customers, providing these end-users with the lowest latency performance and highest return-on-investment for their most commonly-used business applications.

ROI through efficiency and utilization

High-performance computing plays an invaluable role in research, product development and education. It helps accelerate time to market, and it provides significant cost reductions in product development along with tremendous flexibility. One of high-performance computing’s strengths is the ability to achieve the best sustained performance by driving the CPUs toward their limits. Over the past decade, high-performance computing has migrated from supercomputers to commodity clusters; more than eighty percent of the world’s Top500 compute system installations in June 2009 were clusters. The driver for this move appears to be a combination of Moore’s Law (enabling higher-performance computers at lower cost) and the drive for the best cost/performance and power/performance. Cluster productivity and flexibility are the most important factors in choosing a cluster’s hardware and software configuration.

A deeper examination of the world’s Top500 systems based on commodity clusters shows two main interconnect solutions being used to connect the servers into these powerful compute systems: InfiniBand and Ethernet. If we divide the systems according to interconnect family, we see that the same CPUs, memory speeds and other settings are common to both groups. The only difference between the two groups, besides the interconnect, is the system efficiency: how many of the CPU cycles can be dedicated to application work, and how many are wasted. The graph below lists the systems according to their interconnect and their measured efficiency.

Top500 Interconnect Efficiency

As the graph shows, systems connected with Ethernet achieve an average of 50% efficiency, meaning that 50% of the CPU cycles are wasted on non-application work or sit idle waiting for data to arrive. Systems connected with InfiniBand achieve above 80% efficiency on average, meaning that less than 20% of the CPU cycles are wasted. Moreover, the latest InfiniBand-based systems have demonstrated up to 94% efficiency (the best Ethernet-connected systems demonstrated 63%).
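
To make these percentages concrete, here is a small, purely illustrative calculation (a sketch, not a benchmark): the 100 TFlops cluster peak is an assumed number, while the efficiency figures are the ones quoted above, with efficiency following the Top500 convention of sustained Linpack performance (Rmax) divided by theoretical peak (Rpeak).

/* Illustrative only: how much compute capacity each efficiency level wastes.
 * The 100 TFlops peak is assumed; the efficiency figures are quoted above. */
#include <stdio.h>

int main(void)
{
    double rpeak_tflops = 100.0;                 /* assumed cluster peak (Rpeak) */
    double eff[]  = { 0.50, 0.63, 0.80, 0.94 };  /* Eth avg, Eth best, IB avg, IB best */
    const char *label[] = { "Ethernet average", "Ethernet best",
                            "InfiniBand average", "InfiniBand best" };

    for (int i = 0; i < 4; i++)
        printf("%-20s Rmax = %5.1f TFlops, wasted capacity = %4.1f TFlops\n",
               label[i], rpeak_tflops * eff[i], rpeak_tflops * (1.0 - eff[i]));
    return 0;
}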

One might argue that Linpack is not the best benchmark for measuring parallel application efficiency, since it does not fully utilize the network. Yet the graph clearly indicates that even for Linpack the network makes a difference, and for more communication-intensive parallel applications the gap will be much wider.

When choosing a system configuration with the goal of maximizing return on investment, one needs to make sure no artificial bottlenecks are created. Multi-core platforms, parallel applications, large databases and the like require fast data exchange, and lots of it. Ethernet can become the system bottleneck due to its latency and bandwidth limits and the CPU overhead of TCP/UDP processing (TOE solutions introduce other issues, sometimes more complicated ones, but that is a topic for another blog), reducing system efficiency to 50%. This means that half of the compute system is wasted and merely consumes power and cooling; the same performance could have been achieved with half the servers had they been connected with InfiniBand. More data on application performance, productivity and ROI can be found on the HPC Advisory Council web site, under Content/Best Practices.
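
As a rough, hypothetical sizing sketch of that argument, the function below estimates how many identical servers are needed to reach a target sustained performance at a given system efficiency. The target and per-server peak values are assumed for illustration; only the efficiency figures come from the discussion above.

/* Hypothetical sizing sketch: servers needed for a target sustained performance
 * at a given system efficiency. Build with: gcc sizing.c -lm */
#include <math.h>
#include <stdio.h>

static long servers_needed(double target_sustained, double per_server_peak,
                           double efficiency)
{
    return (long)ceil(target_sustained / (per_server_peak * efficiency));
}

int main(void)
{
    double target_tflops   = 50.0;  /* sustained TFlops the application needs (assumed) */
    double per_server_peak = 1.0;   /* peak TFlops per server (assumed)                 */
    double eff[] = { 0.50, 0.80, 0.94 };
    const char *label[] = { "Ethernet (50% average)", "InfiniBand (80% average)",
                            "InfiniBand (94% best)" };

    for (int i = 0; i < 3; i++)
        printf("%-26s %ld servers\n", label[i],
               servers_needed(target_tflops, per_server_peak, eff[i]));
    return 0;
}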

While InfiniBand demonstrates higher efficiency and productivity, there are several ways to increase Ethernet efficiency. One of them is optimizing the transport layer to provide zero-copy transfers and lower CPU overhead (not by using TOE solutions, as those introduce single points of failure in the system). This capability is known as LLE (Low Latency Ethernet). More on LLE will be discussed in future blogs.
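
For readers curious what “zero copy” means at the programming level, below is a minimal, hypothetical sketch using the open libibverbs API, the same verbs programming interface used with InfiniBand adapters (and, later, with RoCE-style Ethernet RDMA). It only registers an application buffer so the adapter can read and write it directly by DMA, without the kernel copying data; connection setup and the actual data transfers are omitted for brevity, and this is not a description of any specific Mellanox or LLE implementation.

/* Minimal sketch: register a buffer with an RDMA-capable adapter via libibverbs
 * so the NIC can access it directly, avoiding kernel data copies.
 * Error handling is reduced to bare checks. Build with: gcc rdma_reg.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    struct ibv_device **dev_list = ibv_get_device_list(NULL);
    if (!dev_list || !dev_list[0]) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);  /* open first device */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);                   /* protection domain */

    size_t len = 4096;
    void *buf = malloc(len);                                 /* application buffer */

    /* Pin the buffer and hand it to the adapter; data can now move to/from it
     * by DMA with no intermediate kernel copy (the "zero copy" part). */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, (unsigned)mr->lkey, (unsigned)mr->rkey);

    /* A real application would now create a queue pair, exchange the rkey with
     * a peer, and post RDMA read/write or send/receive work requests. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}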

Gilad Shainer, Director of Technical Marketing
gilad@mellanox.com