Category Archives: 10 Gigabit Ethernet

The Video Studio of the Future has Just Arrived

Mellanox has a long heritage in high-bandwidth use cases for high-performance computing and enterprise applications, but one little-known development is in real-time video transport. Traditionally, the broadcast industry has moved uncompressed video signals around a broadcast plant over a dedicated interface called SDI (Serial Digital Interface), a family of digital video interfaces first standardized by the Society of Motion Picture and Television Engineers (SMPTE) in 1989 for broadcast-grade video. The speed of SDI technology, however, has not kept up with the accelerating network speeds and bandwidth of Internet Protocol (IP) technology.


Rich Hastie




Mellanox InfiniBand and Ethernet Switches Receive IPv6 Certification

I am proud to announce that Mellanox's SwitchX® line of InfiniBand and Ethernet switches has received a gold certification for Internet Protocol version 6 (IPv6) from the IPv6 Forum.  Adding IPv6 support to our SwitchX series is another milestone for Mellanox's InfiniBand and Ethernet interconnect solutions, and it demonstrates our commitment to producing quality, interoperable InfiniBand and Ethernet products optimized for the latest Internet Protocols.

SX1036 - 36-port 40GbE Switch

Mellanox's drive to satisfy demanding requirements led to this gold certification as part of the IPv6 Ready Logo Program, a conformance and interoperability testing program designed to increase user confidence in IPv6 as the foundation of next-generation network architecture.

We at Mellanox feel that as global technology adoption rates increase, there is a greater need for larger networks and, consequently, more IP addresses. As background, Internet Protocol version 4 (IPv4), still in dominant use, is now reaching the limit of its address capacity. The next generation of IP, IPv6, is designed to provide a vastly expanded address space: it quadruples the number of network address bits from 32 in IPv4 to 128, providing more than enough globally unique IP addresses for every networked device on the planet.
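To put that expansion in perspective, here is a quick back-of-the-envelope calculation (illustrative only, not tied to any particular product):

```python
# Back-of-the-envelope comparison of the IPv4 and IPv6 address spaces.
ipv4_addresses = 2 ** 32    # 32-bit addresses: about 4.3 billion
ipv6_addresses = 2 ** 128   # 128-bit addresses: about 3.4 x 10^38

print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: {ipv6_addresses:,} addresses")
print(f"IPv6 address space is {ipv6_addresses // ipv4_addresses:,} times larger")
```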


Amit Katz

Director, Product Management

Interconnect analysis: InfiniBand and 10GigE in High-Performance Computing

InfiniBand and Ethernet are the leading interconnect solutions for connecting servers and storage systems in high-performance computing and in enterprise (virtualized or not) data centers. Recently, the HPC Advisory Council has put together the most comprehensive database for high-performance computing applications to help users understand the performance, productivity, efficiency and scalability differences between InfiniBand and 10 Gigabit Ethernet.

In summary, a large number of HPC applications need the lowest possible latency or the highest possible bandwidth for best performance (for example, oil and gas as well as weather-related applications). Some HPC applications, such as gene sequencing and other bioinformatics workloads, are not latency sensitive and scale well on TCP-based networks, including GigE and 10GigE. For converged HPC networks, putting message-passing traffic and storage traffic on a single TCP network may not provide enough data throughput for either. Finally, a number of examples show that 10GigE has limited scalability for HPC applications and that InfiniBand is the better solution in terms of performance, price/performance, and power.
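As an illustration of how such latency differences are typically measured, below is a minimal MPI ping-pong sketch (assuming an MPI installation with mpi4py and two ranks; the iteration count and message size are arbitrary choices for illustration):

```python
# Minimal MPI ping-pong latency sketch.
# Run with, for example: mpirun -np 2 python pingpong.py
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
msg = bytearray(8)      # a small message exposes latency rather than bandwidth
iters = 10000

comm.Barrier()
start = time.perf_counter()
for _ in range(iters):
    if rank == 0:
        comm.Send(msg, dest=1)
        comm.Recv(msg, source=1)
    else:
        comm.Recv(msg, source=0)
        comm.Send(msg, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # Each iteration is one round trip, so half of it is the one-way latency.
    print(f"one-way latency: {elapsed / iters / 2 * 1e6:.2f} us")
```

Running the same script over GigE, 10GigE and InfiniBand fabrics is one simple way to see the latency gap the report describes.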

The complete report can be found among the HPC Advisory Council case studies.

40GigE is here!

Today we launched the ConnectX®-2 EN 40G converged network adapter card, the world's first 40 Gigabit Ethernet adapter solution. ConnectX-2 EN 40G enables data centers to maximize the utilization of the latest multi-core processors, achieve unprecedented Ethernet server and storage connectivity, and advance LAN and SAN unification efforts. Mellanox's 40 Gigabit Ethernet converged network adapter sets the stage for next-generation data centers by enabling high-bandwidth Ethernet fabrics optimized for efficiency while reducing cost, power, and complexity.

Available today, ConnectX-2 EN 40G supports hardware-based I/O virtualization, including Single Root I/O Virtualization (SR-IOV), and delivers the features needed for a converged network with support for Data Center Bridging (DCB). Mellanox’s 40 Gigabit Ethernet converged network adapter solution simplifies FCoE deployment with T11 Fibre Channel frame encapsulation support and hardware offloads. The single port ConnectX-2 EN 40G adapter comes with one QSFP connector suitable for use with copper or fiber optic cables to provide the highest flexibility to IT managers.

As part of Mellanox's comprehensive portfolio of 10 Gigabit Ethernet and InfiniBand adapters, ConnectX-2 EN 40G is supported by a full suite of software drivers for Microsoft Windows, Linux distributions, VMware and Citrix XenServer. ConnectX-2 EN 40G supports stateless offloads and is fully interoperable with standard TCP/UDP/IP stacks.
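Because stateless offloads (such as checksum and segmentation offload) are applied by the adapter and driver beneath the sockets API, applications written against the standard stack need no changes to benefit from them. A minimal sketch, where the host name "example.internal" and port 9000 are placeholders:

```python
# Plain TCP client using the standard sockets API. Any stateless offloads
# (checksum, segmentation, etc.) are handled by the NIC and driver below
# this layer, so the application code is unchanged.
# "example.internal" and port 9000 are placeholders for illustration.
import socket

with socket.create_connection(("example.internal", 9000)) as sock:
    sock.sendall(b"payload that the adapter may segment and checksum in hardware")
    reply = sock.recv(4096)
    print(f"received {len(reply)} bytes")
```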

Thanks for coming to see us at VMworld

VMworld was everything we expected and more. Traffic was tremendous, and there was a lot of excitement and buzz in our booth (especially after we won Best of VMworld in the Cloud Computing category). In case you were unable to sit through one of Mellanox's presentations, or one from our partners (Xsigo, HP, Intalio, RNA Networks, and the OpenFabrics Alliance), we videotaped the sessions and have posted them below.


 Mellanox – F.U.E.L. Efficient Virtualized Data Centers


 Mellanox – On-Demand Network Services


 Intalio – Private Cloud Platform


 HP BladeSystem and ExSO SL-Series


 Xsigo – How to Unleash vSphere’s Full Potential with Xsigo Virtual I/O


 RNA Networks – Virtual Memory


 OpenFabrics Alliance – All things Virtual with OpenFabrics and IB

Missed Mellanox at Interop?

Just in case you missed us at Interop 2009, below are just a few of the presentations that took place in our booth.

Mellanox 10 Gigabit Ethernet and 40Gb/s InfiniBand adapters, switches and gateways are key to making your data center F.U.E.L. Efficient


Mellanox Product Manager, Satish Kikkeri, provides additional details on Low-Latency Ethernet


Mellanox Product Manager, TA Ramanujam, provides insight on how data centers can achieve true unified I/O today


Fusion-io’s CTO, David Flynn, presents “Moving Storage to Microsecond Time-Scales”


We look forward to seeing you at our next event or tradeshow.

Brian Sparks

Automotive Makers Require Better Compute Simulation Capabilities

This week I presented at the LS-DYNA user conference. LS-DYNA is one of the most widely used applications for automotive computer simulations, simulations that are used throughout the vehicle design process and reduce the need to build expensive physical prototypes. Computer simulation has shortened the vehicle design cycle from years to months and is responsible for cost reductions throughout the process. From crash and safety simulation to engine and fuel flow, from air conditioning to water pumps, almost every part of the vehicle is designed with computer-aided simulation.

Today's challenges in vehicle simulation center on building more economical and ecological designs: how to design lighter vehicles (using less material) while meeting ever-stricter safety regulations. For example, national and international standards now specify structural crashworthiness requirements for railway vehicle bodies.

Meeting all of those requirements and demands calls for greater compute simulation capability. It is no surprise that LS-DYNA is mostly run on high-performance computing clusters, which provide the flexibility, scalability and efficiency such simulations need. Increasing cluster productivity and the capacity to handle more complex simulations is the most important factor for automotive makers today. It requires a balanced cluster design (CPU, memory, interconnect and GPU hardware, plus software), efficient messaging techniques, and the knowledge of how to extract the most productivity from a given design.

For LS-DYNA, InfiniBand-based interconnect solutions have been proven to provide the highest productivity compared to Ethernet (GigE, 10GigE, iWARP). With InfiniBand, LS-DYNA demonstrates high parallelism and scalability, which enables it to take full advantage of multi-core high-performance computing clusters. Among the Ethernet options (GigE, 10GigE and iWARP), 10GigE is the better choice. While iWARP aims to provide better performance, typical high-performance applications rely on send-receive semantics, where iWARP adds no value; worse, it increases complexity as well as CPU overhead and power consumption.

If you would like a copy of a paper that presents how to increase simulation productivity while decreasing power consumption, don't hesitate to send me a note.

Gilad Shainer

I/O Virtualization

I/O virtualization is a complementary solution to server and storage virtualization that aims to reduce the management complexity of the physical connections into and out of virtual hosts. Virtualized data center clusters have multiple network connections to the LAN and SAN, and virtualizing that I/O avoids the extra complexity of managing them. While I/O virtualization reduces management complexity, maintaining high productivity and scalability also requires attention to the other characteristics of the network being virtualized.

Offloading network virtualization from the VMM (virtual machine manager, e.g. the hypervisor) to a smart network adapter not only reduces the CPU overhead associated with virtualization management, but also increases the performance available to the virtual machines (or guest OSs) and can deliver native performance capabilities to them.

The PCI-SIG has standards in place to help simplify I/O virtualization. The most interesting is Single Root I/O Virtualization (SR-IOV). SR-IOV allows a smart adapter to present multiple virtual adapters (virtual functions) on a single physical server, and those virtual adapters can be assigned directly to a virtual machine (VM) instead of relying on the VMM to manage everything.

SR-IOV provides a standard mechanism for devices to advertise their ability to be simultaneously shared among multiple virtual machines. SR-IOV allows the partitioning of PCI functions into many virtual interfaces for the purpose of sharing the resources of a PCI device in a virtual environment.
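On modern Linux systems (an assumption; this mechanism postdates the original post), SR-IOV virtual functions are typically managed through sysfs. A minimal sketch, where the interface name "eth0" is an assumption and root privileges are required to write the VF count:

```python
# Sketch: query and enable SR-IOV virtual functions through Linux sysfs.
# The interface name "eth0" is an assumption; adjust for the actual adapter.
from pathlib import Path

dev = Path("/sys/class/net/eth0/device")

# How many virtual functions the physical function can expose.
total_vfs = int((dev / "sriov_totalvfs").read_text())
print(f"adapter supports up to {total_vfs} virtual functions")

# Writing to sriov_numvfs instantiates that many VFs (requires root).
(dev / "sriov_numvfs").write_text("4")

# Each VF appears as its own PCI function that can be passed through to a VM.
for vf in sorted(dev.glob("virtfn*")):
    print(vf.name, "->", vf.resolve().name)
```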

Mellanox interconnect solutions provide full SR-IOV support while adding the scalability and high throughput needed to effectively support multiple virtual machines on a single physical server. With Mellanox 10GigE or 40Gb/s InfiniBand solutions, each virtual machine can get the bandwidth allocation it needs to ensure the highest productivity and performance, just as if it were a physical server.

Gilad Shainer
Director of Technical Marketing

SSD over InfiniBand

Last week I was at Storage Networking World in Orlando, Florida. The sessions were much better organized, with focus on all the popular topics, including cloud computing, storage virtualization and solid-state storage (SSD). In our booth, we demonstrated our Layer 2 agnostic storage supporting iSCSI, FCoE (Fibre Channel over Ethernet) and SRP (SCSI RDMA Protocol), all coexisting in a single network. We partnered with Rorke Data, who demonstrated a 40Gb/s InfiniBand-based storage array, and Texas Memory Systems, whose 'World's Fastest Storage' demonstrated sustained rates of 3Gb/s and over 400K I/Os per second using solid-state drives in our booth.

I attended a few of the sessions in the SSD and cloud computing tracks. SSD was my favorite topic, primarily because InfiniBand and SSD together provide the highest storage performance and have the potential to carve out a niche in the data center OLTP applications market. The presentation on SSD by Clod Barrera, IBM's Chief Technical Storage Strategist, was very good. He had a chart showing how HDD I/O rates per GByte have dropped and now hold roughly constant at around 150 to 200 I/Os per second per drive. SSDs, by contrast, can deliver around 50K read I/Os per second and 17K write I/Os per second. Significant synergy can be achieved by combining SSD with InfiniBand technology: InfiniBand delivers the lowest latency, under 1us, and the highest bandwidth, 40Gb/s. The combination of these technologies will provide significant value in the data center and has the potential to change the database and OLTP storage infrastructure.
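To put those figures in perspective, a quick calculation using the numbers quoted above (rounded, for illustration only):

```python
# Rough comparison based on the IOPS figures quoted above.
hdd_iops = 200            # roughly 150-200 random I/Os per second per spinning drive
ssd_read_iops = 50_000    # quoted SSD read IOPS
ssd_write_iops = 17_000   # quoted SSD write IOPS

print(f"HDDs needed to match one SSD on reads:  {ssd_read_iops // hdd_iops}")   # 250
print(f"HDDs needed to match one SSD on writes: {ssd_write_iops // hdd_iops}")  # 85
```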

SSD over InfiniBand delivers:

- Ultra-fast, lowest-latency infrastructure for transaction processing applications

- A more compelling "green" footprint per GB

- Faster recovery times for business continuity applications

- Disruptive scaling

I see a lot of opportunity for InfiniBand technology in storage infrastructure, as SSDs provide the much-needed break from rotating media.

TA Ramanujam (TAR)