Monthly Archives: May 2009

Mellanox Key to Fusion-io’s Demo at Interop

I’m still pondering my take on Interop this year. It’s been a while since I’ve seen so many abandoned spaces on the show floor. Mind you, most were 10×10 or 10×20 spots, but you could tell others really went light on their presence. I saw one exhibitor with a 40×40 space who just filled it with banner stands. Yikes! So nothing was really grabbing me until I got to Fusion-io’s booth and saw the wall of monitors with 1,000 videos playing on it at once.

FINALLY SOMETHING IMPRESSIVE!

Even more amazing, the videos were all being driven by a single PCIe card with 1.2TB of solid-state storage on it. This one “ioSAN” card from Fusion-io completely saturated 16 servers (126 CPU cores), and it did so thanks to the bandwidth and ultra-low latency of 20Gb/s InfiniBand via Mellanox ConnectX adapters. In fact, they told me the 20Gb/s InfiniBand connection could have saturated even more servers; they only brought 16.
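As a back-of-envelope sanity check (my own arithmetic, using only the figures quoted above plus one assumption: 20Gb/s DDR InfiniBand uses 8b/10b encoding, so roughly 16Gb/s of the signaling rate is usable data), the per-video and per-server bandwidth budgets work out comfortably:

```c
#include <stdio.h>

int main(void) {
    /* Figures quoted from the demo */
    const double link_gbps   = 20.0;  /* 20Gb/s InfiniBand signaling rate */
    const int    num_videos  = 1000;  /* videos on the monitor wall */
    const int    num_servers = 16;    /* servers fed by one ioSAN card */

    /* Assumption: 8b/10b encoding on DDR InfiniBand leaves ~80% of the
       signaling rate as usable data bandwidth (~16Gb/s). */
    const double data_gbps = link_gbps * 0.8;

    printf("Per-video budget:  %.1f Mb/s\n", data_gbps * 1000.0 / num_videos);
    printf("Per-server budget: %.2f Gb/s\n", data_gbps / num_servers);
    return 0;
}
```

That comes to about 16Mb/s per video and 1Gb/s per server, which is consistent with their claim that one card could have fed more than 16 servers.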


The video below, featuring Fusion-io’s CTO David Flynn, tells the complete story:

The ioSAN can be used as networked, server-attached storage or integrated into networked storage infrastructure, fundamentally changing how enterprise storage is architected. It is a great example of Mellanox InfiniBand as the enabling technology for next-generation storage.

Talk with you again soon,

Brian Sparks
brian@mellanox.com

Automotive Makers Require Better Compute Simulation Capabilities

This week I presented at the LS-DYNA users conference. LS-DYNA is one of the most widely used applications for automotive computer simulations, which are used throughout the vehicle design process and reduce the need to build expensive physical prototypes. Computer simulation has cut the vehicle design cycle from years to months and reduced cost throughout the process. Almost every part of a vehicle is designed with computer-aided simulation: from crash/safety simulation to engine and gasoline flow, from air conditioning to water pumps, nearly every component is simulated.

Today’s challenges in vehicle simulation are driven by the push for more economical and ecological designs: how to design lighter vehicles (using less material) while meeting ever-stricter safety regulations. For example, national and international standards have been put in place that specify structural crashworthiness requirements for railway vehicle bodies.

Meeting all of those requirements and demands takes greater compute simulation capability. It is no surprise that LS-DYNA is mostly run in high-performance clustering environments, as they provide the flexibility, scalability and efficiency such simulations need. Increasing high-performance cluster productivity and the capability to handle more complex simulations is the most important factor for automotive makers today. It requires a balanced cluster design (hardware: CPU, memory, interconnect, GPU; and software), enhanced messaging techniques, and knowledge of how to extract the most productivity from a given design.

For LS-DYNA, InfiniBand-based interconnect solutions have proven to provide the highest productivity compared to Ethernet (GigE, 10GigE, iWARP). With InfiniBand, LS-DYNA demonstrates high parallelism and scalability, enabling it to take full advantage of multi-core high-performance computing clusters. Among the Ethernet options, 10GigE is the better choice. While iWARP aims to provide better performance, typical high-performance applications use send-receive semantics, to which iWARP adds no value; worse, it only increases complexity and CPU overhead/power consumption.
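To make “send-receive semantics” concrete, here is a minimal MPI ping-pong sketch in C. It is an illustrative standalone example, not LS-DYNA code, and the iteration count is an arbitrary choice of mine; but this two-sided MPI_Send/MPI_Recv pattern is what most HPC applications rely on, and it is exactly the pattern whose performance the interconnect’s latency determines:

```c
/* Minimal MPI send-receive ping-pong: the two-sided messaging pattern
   typical HPC codes rely on. Build: mpicc pingpong.c -o pingpong
   Run across two nodes:          mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, i;
    double buf = 0.0;
    const int iters = 1000;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&buf, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&buf, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&buf, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("Half round-trip latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
```

Run with one rank per node and the reported half round-trip time is, to first order, the fabric latency your simulation pays on every exchange.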

If you would like a copy of a paper that presents how to increase simulation productivity while decreasing power consumption, don’t hesitate to send me a note (hpc@mellanox.com).

Gilad Shainer
shainer@mellanox.com

We will be at Interop!

Come visit us at the Mellanox booth (#2307) at Interop in Las Vegas, Nevada from May 19-21, 2009. We will be demonstrating the latest additions to our 10 Gigabit Ethernet product line, including our recently announced dual-port ConnectX EN 10GBASE-T adapter and PhyX, our 6-port, high-density, multi-protocol physical-layer silicon device designed for 10GigE switches and pass-through modules. We will also be showing these products at the Ethernet Alliance booth (#527).

Other demos in our booth include our latest BridgeX gateway, where we will show I/O consolidation over FCoE, and a native InfiniBand SSD storage demo showcasing Mellanox 40Gb/s InfiniBand ConnectX adapters with a Fusion-io ioSAN.

We have a great line-up of presenters in our booth covering a wide range of topics. For example, David Flynn, CTO of Fusion-io, will deliver a presentation on “Moving storage networking into the microsecond timescale – The fusion of solid state storage and high performance networking.” Bruce Fingles, VP of Product Management at Xsigo, will present “Next Generation Data Center Connectivity: How Virtual I/O Cuts Costs by 50% and Simplifies Management.” Arista will also be presenting on their latest line of 10GBASE-T switches, and of course, the Mellanox staff have a few presentations up their sleeves. Did I mention there will be prizes? All presentations start on the half-hour and repeat throughout the day, so they can fit into your busy and hectic schedule at the show.

We look forward to seeing you there!

Brian Sparks

brian@mellanox.com

Web Retailer Uses InfiniBand to Improve Response Time to Its Customers

Recently, while talking with an IT operations manager at a major Web retailer, I was enlightened about the importance of reducing latency in web-based applications. He explained that they had been challenged to find a way to reduce response time to their web customers. After investigating for quite some time, they discovered that the major issue was the time it takes to initiate a TCP transaction between their app servers and database servers. Their search subsequently focused on finding the interconnect fabric that would minimize this time.
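For illustration, here is a minimal C sketch of how one might measure that TCP connection-setup cost from an app server. The IP address and port below are placeholders of mine, not the retailer’s actual configuration:

```c
/* Rough measurement of TCP connection-setup time (the cost the
   retailer identified). The host and port are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

int main(void) {
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5432);                    /* e.g. database port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* placeholder IP */

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);

    /* The three-way handshake completes inside connect() */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    gettimeofday(&t1, NULL);
    printf("TCP connect took %ld us\n",
           (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec));
    close(fd);
    return 0;
}
```

On a typical Ethernet LAN, the handshake alone lands in the tens to hundreds of microseconds; multiply that by every transaction and the appeal of a 1-microsecond fabric becomes obvious.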

Well, they found it in InfiniBand. With its 1-microsecond latency between servers, this web retailer saw a tremendous opportunity to improve response time to its customers. In subsequent proof-of-concept testing, they found that they could indeed reduce latency between their app servers and database servers, improving response time to their customers by over 30%. This is a huge advantage in their highly competitive market. I would tell you who they are, but they would probably shoot me.

More and more enterprise data centers are finding that low-latency, high-performance interconnects like InfiniBand can improve their customer-facing systems and, in turn, their web business.

If you want to hear more, or try it for yourself, send me an email.

Thanks,

Wayne Augsburger
Vice President of Business Development
wayne@mellanox.com