Mellanox ConnectX EN 10GbE NIC with VMware Virtualization Software Proven to Deliver Breakthrough Server Utilization

Benchmarks Using Prototype Driver for VMware ESX Server 3.5 Provide Real-World Proof Points for Optimizing Capital Expenditure, Power Consumption, and Total Cost of Ownership

VMworld Europe 2008, Cannes, France, February 26, 2008: Mellanox® Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of semiconductor-based server and storage interconnect products, today announced that its ConnectX EN 10GbE NIC adapters facilitate maximum server application processing performance in virtualized data center environments. Recent benchmarks demonstrate that Mellanox adapters maintain 9.6 Gb/s throughput as the number of virtual machines (VMs) in VMware ESX Server 3.5 is scaled up to 16 in multi-core CPU environments. This improves server utilization: more VMs can be deployed per physical server while application I/O performance is maintained or enhanced. ConnectX EN’s leading performance optimizes data center server infrastructure and provides significant cost and power consumption savings by enabling more VMs per physical server. These adapters target an installed base estimated at 3 million virtualized servers as of 2007, expected to grow to 15-20 million by 2012.

“ConnectX EN supports all networking features enabled in VMware ESX Server 3.5, including support for VMware’s NetQueue specification for boosting I/O performance,” said Thad Omura, vice president of product marketing at Mellanox Technologies. “ConnectX EN delivers compelling VM scaling, server utilization and cost benefits in multi-core CPU environments improving productivity, efficiency and enabling corporations to do more with less.”

ConnectX EN 10GbE NIC adapters offer leading-edge hardware-based I/O virtualization features. These features are compatible with and complement PCI Single Root I/O Virtualization (SR-IOV) as well as AMD® (AMD-V) and Intel® (Intel VT) hardware virtualization features, delivering advanced, secure and granular levels of I/O services to applications running in VMs. Both LAN and iSCSI SAN (storage area network) traffic can be consolidated over the same 10GbE NIC, and in the future, FC SAN connectivity will be enabled using the emerging FCoE (Fibre Channel over Ethernet) standard.

In tests conducted by Mellanox using Dell® servers (Dell 1950, 8 cores, Intel® Xeon® E5410 “Clovertown” CPUs @ 2.33GHz) running VMware ESX Server 3.5, 9.6 Gb/s throughput was achieved using traffic from 5 virtual machines (SLES10-based, 2.6.23 kernel)*. Bandwidth was measured with the IXIA IxChariot benchmarking tool between VMs running on two physical servers. Throughput was sustained at 9.6 Gb/s as the number of virtual machines was scaled to 16, enabling significantly higher server utilization without I/O bottlenecks. CPU utilization results indicate that this I/O throughput can be sustained with 5 CPU cores, leaving 3 cores free for other application usage. On average, 2 VMs can run per CPU core while sustaining full I/O throughput.

Mellanox offers a complete family of ConnectX EN 10 Gigabit Ethernet adapters supporting a wide variety of cabling options, including UTP and CX4 for copper, and SR, LR and LRM for fiber optics. Mellanox ConnectX EN provides data centers with a rich set of availability, performance and QoS features that enable virtualization, and supports emerging technologies such as FCoE (Fibre Channel over Ethernet) and CEE (Converged Enhanced Ethernet) on a single adapter, providing I/O convergence.

VMware ESX Server 3.5 certified drivers for Mellanox ConnectX EN will be generally available in the second quarter of 2008.

Visit Mellanox Technologies at VMworld Europe 2008
Come visit Mellanox (Booth #5) at VMworld Europe 2008, February 26-28, 2008, to see the latest demonstrations of Mellanox InfiniBand and Ethernet adapter cards with VMware ESX Server 3.5.

During the conference, Sujal Das, senior director of product management at Mellanox Technologies, will present “Delivering Maximum Bang for the Buck with Unified I/O Options in VMware ESX Server 3.5.” The presentation will be held on Thursday, February 28, at 10:15 am in room Redaction 2.

About Mellanox
Mellanox Technologies is a leading supplier of semiconductor-based, high-performance, InfiniBand and Ethernet connectivity products that facilitate data transmission between servers, communications infrastructure equipment and storage systems. The company’s products are an integral part of a total solution focused on computing, storage and communication applications used in enterprise data centers, high-performance computing and embedded systems.

Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California and Yokneam, Israel. For more information, visit Mellanox at

Mellanox, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are registered trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

* Using 9000 Byte MTU


Safe Harbor Statement under the Private Securities Litigation Reform Act of 1995:
All statements included or incorporated by reference in this release, other than statements or characterizations of historical fact, are forward-looking statements. These forward-looking statements are based on our current expectations, estimates and projections about our industry and business, management's beliefs and certain assumptions made by us, all of which are subject to change.

Forward-looking statements can often be identified by words such as "anticipates," "expects," "intends," "plans," "predicts," "believes," "seeks," "estimates," "may," "will," "should," "would," "could," "potential," "continue," "ongoing," similar expressions and variations or negatives of these words. These forward-looking statements are not guarantees of future results and are subject to risks, uncertainties and assumptions that could cause our actual results to differ materially and adversely from those expressed in any forward-looking statement.

The risks and uncertainties that could cause our results to differ materially from those expressed or implied by such forward-looking statements include the continued growth in the install base of virtualized servers, the continued growth in demand for HPC products, the continued, increased demand for industry standards-based technology, our ability to react to trends and challenges in our business and the markets in which we operate; our ability to anticipate market needs or develop new or enhanced products to meet those needs; the adoption rate of our products; our ability to establish and maintain successful relationships with our OEM partners; our ability to compete in our industry; fluctuations in demand, sales cycles and prices for our products and services; our ability to protect our intellectual property rights; general political, economic and market conditions and events; and other risks and uncertainties described more fully in our documents filed with or furnished to the Securities and Exchange Commission.

More information about the risks, uncertainties and assumptions that may impact our business is set forth in our Form 10-Q filed with the SEC on November 8, 2007, and our Form 10-K filed with the SEC on March 26, 2007, including “Risk Factors”. All forward-looking statements in this press release are based on information available to us as of the date hereof, and we assume no obligation to update these forward-looking statements.



For more information:
Mellanox Technologies
Brian Sparks
