Yearly Archives: 2009

TOP500 33rd List Highlights

Started in 1993, the TOP500 lists the fastest computers used today, ranked according to Linpack benchmark results. Published twice a year, the TOP500 list provides an important tool for tracking usage trends in high-performance computing. The 33rd TOP500 List was released in Hamburg, Germany, during the ISC’09 conference.

This year’s list revealed that Mellanox InfiniBand demonstrated up to 94 percent system utilization, only 6 percent under the theoretical limit, providing users with the best return on investment for their high-performance computing server and storage infrastructure. The list also shows that InfiniBand is the only industry-standard interconnect solution still growing, increasing 25 percent to 152 systems and representing more than 30 percent of the TOP500. Mellanox ConnectX® InfiniBand adapters and switch systems based on its InfiniScale® III and IV switch silicon provide the scalable, low-latency, and power-efficient interconnect for the world’s fastest supercomputer and the majority of the top 100 systems. Mellanox end-to-end 40Gb/s InfiniBand solutions deliver leading performance and the highest Top10 system efficiency in the 10th-ranked Jülich cluster.
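
For readers less familiar with the metric, the “system utilization” quoted here is simply the Linpack efficiency reported on the TOP500: the measured Linpack result (Rmax) divided by the theoretical peak (Rpeak). A minimal sketch in Python, using illustrative numbers chosen to match the 94 percent figure rather than any specific TOP500 entry:

```python
# Linpack efficiency as reported on the TOP500 list: Rmax / Rpeak.
# The figures below are illustrative only; they are not taken from a real system.
rmax_tflops = 94.0    # hypothetical measured Linpack performance
rpeak_tflops = 100.0  # hypothetical theoretical peak performance

efficiency = rmax_tflops / rpeak_tflops
print(f"Linpack efficiency: {efficiency:.0%}")  # -> 94%
```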

Highlights of InfiniBand usage on the June 2009 TOP500 list include:

- Mellanox InfiniBand interconnect products connect the world’s fastest supercomputer, 4 of the top 10 most prestigious positions, and 9 of the top 20 systems
- Mellanox InfiniBand provides the highest system utilization, up to 94 percent, which is 50 percent higher than the best GigE-based system
- Mellanox 40Gb/s end-to-end solutions provide the highest system utilization in the top 10, 15 percent higher than the average Top20 efficiency
- All InfiniBand-based clusters (152 total supercomputers) use Mellanox solutions
- InfiniBand is the most used interconnect among the top 100 supercomputers with 59 systems, nearly 12 times the number of Gigabit Ethernet-based clusters and of proprietary high-speed cluster interconnects
- The total number of InfiniBand-connected CPU cores on the list has grown from 606,000 in June 2008 to 1,040,000 in June 2009 (72 percent yearly growth)
- InfiniBand is the only growing industry-standard clustered interconnect in the TOP500 with a 25 percent growth rate compared to June 2008
- Mellanox InfiniBand interconnect products present in the TOP500 are used by a diverse list of applications, from large-scale, high-performance computing to commercial technical computing and enterprise data centers
- The entry level for the TOP500 list is 17 TFlops, 91 percent higher than the 8.9 TFlops necessary to be on the June 2008 list

Full analysis of the TOP500 can be found HERE.

Join Mellanox at ISC’09 and Celebrate the PetaScale Era

Join Mellanox (Booth #520) at ISC’09, Tuesday the 23rd at 6:15pm for a special champagne toast, as we unveil the industry’s most efficient family of 40Gb/s InfiniBand switches and usher in the PetaScale Era.

See why Mellanox end-to-end 40Gb/s InfiniBand connectivity products deliver the industry’s leading CPU efficiency rating on the TOP500, and why the world’s leading European Petaflop Initiative depends on Mellanox to reach its performance goals.

Live Demonstrations:

Europe’s Largest 40Gb/s Network Demonstration

Mellanox 40Gb/s InfiniBand adapter and switch solutions are being used to create Europe’s largest 40Gb/s Remote Desktop over InfiniBand (RDI) demonstration on the ISC tradeshow floor. Participating in the 40Gb/s ecosystem demonstration: AMD, Avago Technologies, DataDirect Networks, Dell, Emcore, Eurotech, Finisar, HP, LSI, Luxtera, Microsoft, NEC, Scalable Graphics, ScaleMP, Sun, Supermicro, Tyco Electronics, Voltaire, and Zarlink.

Low-Latency 10 Gigabit Ethernet

Low-Latency Ethernet (LLE) is enabled through an efficient RDMA transport over Layer 2 10GbE networks for performance-critical and low-latency applications. The demonstration showcases LLE running over Mellanox’s ConnectX EN 10 Gigabit Ethernet adapters at line rate with industry-leading latency of 3 microseconds.
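
For context on how such latency figures are typically quoted, the sketch below times a simple TCP ping-pong over loopback and reports half of the average round-trip time as the one-way latency. It is a baseline illustration only – it exercises the ordinary kernel TCP stack rather than the LLE/RDMA path described above – and the port, message size, and iteration count are arbitrary choices.

```python
# Minimal TCP ping-pong latency sketch (illustration only, not an LLE/RDMA test).
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007   # arbitrary loopback endpoint
ITERS, MSG = 10000, b"x" * 64     # small messages, as in typical latency tests

def recv_exact(sock, n):
    """Read exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def echo_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            for _ in range(ITERS):
                conn.sendall(recv_exact(conn, len(MSG)))  # echo each message back

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

with socket.create_connection((HOST, PORT)) as cli:
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    start = time.perf_counter()
    for _ in range(ITERS):
        cli.sendall(MSG)
        recv_exact(cli, len(MSG))
    rtt = (time.perf_counter() - start) / ITERS

print(f"one-way latency ~ {rtt / 2 * 1e6:.1f} microseconds")
```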

Hear from our HPC Industry Experts:

Topic: ISC Press Conference – “40Gb/s InfiniBand Network”
Speaker: Gilad Shainer, Director of HPC Marketing
Tuesday, June 23, 2009, 9:00AM – 10:00AM

Topic: “The Jülich Research on Petaflops Architectures”
Speaker: Gilad Shainer, Director of HPC Marketing
Tuesday, June 23, 2009, 1:30PM – 2:10PM

Topic: “From PetaScale to Clouds – Providing the Ultimate Networking Solution for High Performance”
Speaker: Michael Kagan, CTO
Wednesday, June 24, 2009, 12:00PM – 12:30PM

Topic: Jülich Breakfast Presentation
Speakers: Michael Kagan, CTO, and Gilad Shainer, Director of HPC Marketing
Thursday, June 25, 2009, 7:30AM – 9:00AM

Topic: Hot Seat Session – “From PetaScale to Clouds – Providing the Ultimate Networking Solution for High Performance”
Speaker: Michael Kagan, CTO
Thursday, June 25, 2009, 3:30PM – 3:45PM

Topic: “Maintaining High Performance in a Cloud”
Speaker: Gilad Shainer, Director of HPC Marketing
Friday, June 26, 2009, 9:40AM – 10:20AM

Visit Mellanox at SIFMA and See How You Can Accelerate Transaction Performance 10X for Financial Applications

Join us at the SIFMA Technology Management Conference, June 23–25 in New York (Booth 1619), and see how Mellanox can accelerate transaction performance 10X for financial applications.

Mellanox ConnectX Virtual Protocol Interconnect Adapters with 40Gb/s InfiniBand and low-latency 10 Gigabit Ethernet with FCoE offload integrate with financial market data applications from providers like NYSE Technologies and Reuters to significantly enhance the speed and predictability of market data delivery, and ensure the highest ROI. 

We will also be hosting a software demonstration from RNA Networks. The RNAcache and RNAmessenger software, based on RNA’s Memory Virtualization Platform, makes memory a shared network resource and transparently makes trade execution and analytics dramatically faster. With RNA and Mellanox, trading groups are seeing 5X faster trade execution, including certified messaging without jitter, and 17X faster tick data analytics, all without changes to their trading applications.

Visit us and see why 3 of the top 5 exchanges and 7 of the top 10 banks worldwide depend on Mellanox.

Inauguration of 1st European Petaflop Computer in Jülich, Germany

On Tuesday, May 26, the Research Center Jülich reached a significant milestone for German and European supercomputing with the inauguration of two new supercomputers: the supercomputer JUROPA and the fusion machine HPC-FF. The symbolic start of the systems was triggered by the German Federal Minister for Education and Research, Prof. Dr. Annette Schavan, the Prime Minister of North Rhine-Westphalia, Dr. Jürgen Rüttgers, and Prof. Dr. Achim Bachem, Chairman of the Board of Directors of Research Center Jülich, in the presence of high-ranking international guests from academia, industry and politics.

JUROPA (which stands for Jülich Research on Petaflop Architectures) will be used by more than 200 research groups across Europe to run their data-intensive applications. JUROPA is based on a cluster configuration of Sun Blade servers, Intel Nehalem processors, Mellanox 40Gb/s InfiniBand, and the ParaStation cluster operating software from ParTec Cluster Competence Center GmbH. The system was jointly developed by experts of the Jülich Supercomputing Centre and implemented with partner companies Bull, Sun, Intel, Mellanox and ParTec. It consists of 2,208 compute nodes with a total computing power of 207 Teraflops and was sponsored by the Helmholtz Association. Prof. Dr. Dr. Thomas Lippert, Head of the Jülich Supercomputing Centre, explains the HPC installation in Jülich in the video below.

HPC-FF (High Performance Computing – for Fusion), drawn up by the team headed by Prof. Dr. Dr. Thomas Lippert, director of the Jülich Supercomputing Centre, was optimized and implemented together with the partner companies Bull, Sun, Intel, Mellanox and ParTec. This new best-of-breed system, one of Europe’s most powerful, will support advanced research in many areas such as health, information, environment, and energy. It consists of 1,080 compute nodes, each equipped with two quad-core Intel Nehalem EP processors. Its total computing power of 101 Teraflops currently corresponds to 30th place in the list of the world’s fastest supercomputers. The combined cluster will achieve 300 Teraflops of computing power and will be included in the TOP500 list published this month at ISC’09 in Hamburg, Germany.

40Gb/s InfiniBand from Mellanox is used as the system interconnect. The administrative infrastructure is based on NovaScale R422-E2 servers from French supercomputer manufacturer Bull, which supplied the compute hardware and the Sun ZFS/Lustre file system. The cluster operating system, ParaStation V5, is supplied by Munich software company ParTec. HPC-FF is being funded by the European Commission (EURATOM), the member institutes of EFDA, and Forschungszentrum Jülich.

Complete system facts: 3,288 compute nodes; 79 TB main memory; 26,304 cores; 308 Teraflops peak performance
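
As a back-of-the-envelope check of those figures, the sketch below reproduces the quoted core count and peak performance. The 2.93 GHz clock and 4 double-precision flops per cycle per core are assumptions not stated in the post; they are simply the values that make the arithmetic match the 308 Teraflops peak.

```python
# Sanity check of the combined JUROPA + HPC-FF system facts quoted above.
nodes = 2208 + 1080        # JUROPA nodes + HPC-FF nodes
cores = nodes * 2 * 4      # two quad-core Nehalem EP sockets per node
clock_hz = 2.93e9          # assumed core clock (not stated in the post)
flops_per_cycle = 4        # assumed double-precision flops per core per cycle

rpeak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(cores)                                 # 26304 cores, as listed
print(f"{rpeak_tflops:.0f} Teraflops peak")  # ~308 Teraflops
```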

Missed Mellanox at Interop?

Just in case you missed us at Interop 2009, below are a few of the presentations that took place in our booth.

Mellanox 10 Gigabit Ethernet and 40Gb/s InfiniBand adapters, switches and gateways are key to making your data center F.U.E.L. Efficient

Mellanox Product Manager, Satish Kikkeri, provides additional details on Low-Latency Ethernet

Mellanox Product Manager, TA Ramanujam, provides insight on how data centers can achieve true unified I/O today

Fusion-io’s CTO, David Flynn, presents “Moving Storage to Microsecond Time-Scales”

We look forward to seeing you at our next event or tradeshow.

Brian Sparks
brian@mellanox.com

Mellanox Key to Fusion-io’s Demo at Interop

I’m still pondering my take on Interop this year. It’s been a while since I’ve seen so many abandoned spaces on the show floor. Mind you, most were 10×10 or 10×20 spots, but you could tell there were others who really went light on their presence. I saw one 40×40 booth that was simply filled with banner stands. Yikes! Nothing was really grabbing me until I got to Fusion-io’s booth and saw the wall of monitors with 1,000 videos playing on it at once.

FINALLY SOMETHING IMPRESSIVE!

Even more amazing, the videos were all being driven by a single PCIe card with 1.2TB of solid-state storage on it. This one “ioSAN” card from Fusion-io completely saturated 16 servers (126 CPU cores), and it achieved this through the bandwidth and ultra-low latency of 20Gb/s InfiniBand via Mellanox’s ConnectX adapters. In fact, they told me the 20Gb/s InfiniBand connection would have allowed them to saturate even more servers; they only brought 16.


The video below, featuring Fusion-io’s CTO David Flynn, tells the complete story:

The ioSAN can be used as networked, server-attached storage or integrated into networked storage infrastructure, bringing fundamental changes to enterprise storage. This is a great example of how Mellanox InfiniBand is an enabling technology for next-generation storage.

Talk with you again soon,

Brian Sparks
brian@mellanox.com

The Automotive Makers Require Better Compute Simulations Capabilities

This week I presented at the LS-DYNA Users Conference. LS-DYNA is one of the most widely used applications for automotive computer simulations – simulations that are used throughout the vehicle design process and reduce the need to build expensive physical prototypes. Computer simulation has shortened the vehicle design cycle from years to months and is responsible for cost reductions throughout the process. Almost every part of the vehicle is designed with computer-aided simulation: from crash/safety simulation to engine and gasoline flow, from air conditioning to water pumps.

Today’s challenges in vehicle simulation center on the drive toward more economical and ecological designs: how to design lighter vehicles (using less material) while meeting ever-stricter safety regulations. For example, national and international standards have been put in place that specify structural crashworthiness requirements for railway vehicle bodies.

Meeting all of these requirements and demands calls for greater compute simulation capability. It is no surprise that LS-DYNA is mostly run in high-performance clustering environments, as they provide the flexibility, scalability and efficiency such simulations need. Increasing cluster productivity and the capability to handle more complex simulations is the most important factor for automotive makers today. It requires a balanced cluster design (hardware – CPU, memory, interconnect, GPU – and software), enhanced messaging techniques, and the knowledge of how to extract more productivity from a given design.

For LS-DYNA, InfiniBand-based interconnect solutions have been proven to provide the highest productivity compared to Ethernet (GigE, 10GigE, iWARP). With InfiniBand, LS-DYNA demonstrates high parallelism and scalability, enabling it to take full advantage of multi-core high-performance computing clusters. Among the Ethernet options – GigE, 10GigE and iWARP – the better choice is 10GigE. While iWARP aims to provide better performance, typical high-performance applications rely on send-receive semantics, for which iWARP adds no value; worse, it only increases complexity, CPU overhead and power consumption.
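
To make the send-receive point concrete, here is a minimal mpi4py sketch of the two-sided nearest-neighbour exchange that dominates domain-decomposed solvers such as LS-DYNA. It illustrates the communication pattern only – it is not LS-DYNA code – and the ring topology and buffer size are arbitrary choices.

```python
# Run with, e.g.: mpirun -np 4 python neighbour_exchange.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

boundary = np.full(1024, rank, dtype=np.float64)  # stand-in for boundary data
incoming = np.empty_like(boundary)

right = (rank + 1) % size
left = (rank - 1) % size

# Classic two-sided send/receive: each rank sends its boundary data to the
# right neighbour while receiving from the left one, without risk of deadlock.
comm.Sendrecv(sendbuf=boundary, dest=right, recvbuf=incoming, source=left)

print(f"rank {rank} received boundary data from rank {left}")
```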

If you would like a copy of a paper that presents how to increase simulation productivity while decreasing power consumption, don’t hesitate to send me a note (hpc@mellanox.com).

Gilad Shainer
shainer@mellanox.com

We will be at Interop!

Come visit us at the Mellanox booth (#2307) at Interop in Las Vegas, Nevada from May 19-21, 2009. We will be demonstrating our latest additions to our 10 Gigabit Ethernet product line, including our recently announced dual-port ConnectX ENt 10GBASE-T adapter and PhyX, our 6-port, high-density, multi-protocol Physical layer silicon device designed for 10GigE switches and pass-through modules. We will also be showing these products at the Ethernet Alliance booth (#527).

Other demos in our booth include our latest BridgeX gateway, where we will show I/O consolidation over FCoE, and a native InfiniBand SSD storage demonstration showcasing Mellanox 40Gb/s InfiniBand ConnectX adapters with a Fusion-io SAN.

We have a great line-up of presenters in our booth who will provide you with a great array of knowledge. For example, David Flynn, CTO of Fusion-io, will deliver a presentation on “Moving storage networking into the microsecond timescale – The fusion of solid state storage and high performance networking.” Bruce Fingles, VP of Product Management at Xsigo, will present “Next Generation Data Center Connectivity: How Virtual I/O Cuts Costs by 50% and Simplifies Management.” Arista will also be presenting on their latest line of 10GBASE-T switches, and of course, the Mellanox staff have a few presentations up their sleeves. Did I mention there will be prizes? All presentations start on the half-hour and repeat throughout the day to fit into your busy and hectic schedule at the show.

We look forward to seeing you there!

Brian Sparks

brian@mellanox.com

Web Retailer Uses InfiniBand to Improve Response Time to Its Customers

Recently while talking with an IT operations manager for a major Web retailer, I was enlightened on the importance of reducing latency in web-based applications. He explained that they were challenged to find a way to reduce the response time to their web customers. They investigated this for quite some time before discovering that the major issue seemed to be the time it takes to initiate a TCP transaction between their app servers and database servers. Subsequently their search focused on finding the best interconnect fabric to minimize this time.
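
To illustrate the kind of cost they were chasing, the sketch below times repeated TCP connection establishment against a local listener standing in for the database tier. It is a rough illustration of the general effect – every new connection costs at least a handshake round trip before any query is sent – not a reproduction of the retailer’s environment, and the iteration count is arbitrary.

```python
# Measure the average cost of setting up a brand-new TCP connection per request.
import socket
import threading
import time

N = 200  # arbitrary number of "transactions"

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # local stand-in for the database tier
srv.listen(128)
host, port = srv.getsockname()

def acceptor():
    # Accept and immediately close each incoming connection.
    for _ in range(N):
        conn, _ = srv.accept()
        conn.close()

threading.Thread(target=acceptor, daemon=True).start()

start = time.perf_counter()
for _ in range(N):
    socket.create_connection((host, port)).close()  # full handshake every time
per_connect_us = (time.perf_counter() - start) / N * 1e6

print(f"avg TCP connection setup: {per_connect_us:.0f} microseconds")
srv.close()
```

Even over loopback this shows a measurable per-connection cost; over a real network each setup also pays the full round-trip time, which is why transaction initiation figured so prominently in their investigation.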

Well, they found it in InfiniBand. With its 1-microsecond latency between servers, this web retailer saw a tremendous opportunity to improve response time to its customers. In their subsequent proof-of-concept testing, they found that they could indeed reduce latency between their app servers and database servers. The resulting improvement for their customers is over 30 percent – a huge advantage in their highly competitive market. I would tell you who they are, but they would probably shoot me.

More and more enterprise data centers are finding that low latency, high-performance interconnects, like InfiniBand, can improve their customer-facing systems and their resulting web business.

If you want to hear more, or try it for yourself, send me an email.

Thanks,

Wayne Augsburger
Vice President of Business Development
wayne@mellanox.com