Mellanox Announces Availability of Turnkey NFS-RDMA SDK for InfiniBand that Delivers 10X Throughput Improvement

Linux Network File System with InfiniBand RDMA Delivers Significant Cost, Scaling and Performance Benefits to End User Applications

LINUXWORLD CONFERENCE & EXPO, SAN FRANCISCO, CA – August 6, 2007 – Mellanox™ Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of semiconductor-based server and storage interconnect products, today announced the general availability of the NFS-RDMA SDK (Network File System over Remote Direct Memory Access Software Development Kit) for its InfiniBand adapter products. The SDK supports the OpenFabrics Enterprise Distribution (OFED) version 1.2 software stack and delivers 1.3GB/s (gigabytes per second) of read throughput and 600MB/s (megabytes per second) of write throughput over a single InfiniBand link – a tenfold improvement over existing NFS over Gigabit Ethernet solutions available in the market.

“Mellanox InfiniBand solutions offer the best price/performance when it comes to delivering compute and storage capacity scaling and performance using multi-core, CPU-based commodity servers,” said Sujal Das, director of software product management at Mellanox Technologies. “With the release of the NFS-RDMA SDK for InfiniBand, we are accelerating the time to market for OEMs and end users, enabling them to gain a competitive edge and better ROI for their network file system-based applications.”

Unprecedented Network File System Performance
Using the Mellanox MTD2000 storage platform reference design (based on Intel CPUs, Mellanox InfiniHost III adapters, OFED 1.2, and RHEL5 or SLES 10 SP1) as an NFS-RDMA server with up to four NFS-RDMA clients, read throughput of 1.3GB/s was achieved for file sizes ranging from 64 to 1024 megabytes, and write throughput of 550 to 590MB/s for file sizes ranging from 64 to 512 megabytes, measured with the IOzone file system benchmark (http://www.iozone.org/). Read and write throughput is sustained across all record sizes from 4 kilobytes to 512 kilobytes.
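
As an illustration of the methodology, a sweep over comparable file and record sizes can be scripted around IOzone. The sketch below is illustrative only: the mount point and test file path are hypothetical, and the option set follows the standard IOzone documentation rather than the exact configuration Mellanox used.

    # Illustrative sketch: sweep IOzone write (-i 0) and read (-i 1) tests over
    # the file and record sizes cited above. Assumes iozone is installed and an
    # NFS-RDMA export is already mounted at /mnt/nfsrdma (hypothetical path).
    import itertools
    import subprocess

    MOUNT_POINT = "/mnt/nfsrdma"                 # hypothetical NFS-RDMA mount point
    FILE_SIZES_MB = [64, 128, 256, 512, 1024]    # file sizes from the results above
    RECORD_SIZES_KB = [4, 16, 64, 256, 512]      # record sizes from the results above

    for file_mb, record_kb in itertools.product(FILE_SIZES_MB, RECORD_SIZES_KB):
        cmd = [
            "iozone",
            "-i", "0",                           # test 0: write/rewrite
            "-i", "1",                           # test 1: read/reread
            "-e",                                # include fsync in write timing
            "-s", f"{file_mb}m",                 # file size
            "-r", f"{record_kb}k",               # record size
            "-f", f"{MOUNT_POINT}/iozone.tmp",   # temporary test file on the mount
        ]
        print("Running:", " ".join(cmd))
        subprocess.run(cmd, check=True)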

Cost-effective Scaling and Resource Consolidation for Real Applications
With low-latency node-to-node connectivity and low price and power per megabyte of available bandwidth, InfiniBand enables cost-effective scaling of both compute and storage capacity. The high storage throughput of InfiniBand and NFS-RDMA translates into an estimated 5X I/O price/performance improvement over existing enterprise-class storage platforms based on Gigabit Ethernet interfaces. Using white-box, Linux-based commodity storage platforms such as the MTD2000 can further reduce the capital cost of deployment.

Applications where storage and compute capacity growth are critical can benefit from Mellanox’s NFS-RDMA SDK. These include clustered databases, CAD (computer-aided design), CAE (computer-aided engineering), DCC (digital content creation), EDA (electronic design automation), financial services, order management, and web services. For example, in web services applications where managing storage capacity growth and delivering high transaction performance at minimal cost are critical, NFS-RDMA over InfiniBand has been shown to deliver compelling value.

“To deliver end-user satisfaction with applications that demand multiple terabytes of file-based storage capacity, cost and power savings are as important as the ability to scale in a flexible way,” said Ekechi Nwokah, Storage Architect at Alexa Internet, an Amazon.com subsidiary. “Our data mining applications using NFS-RDMA and Mellanox-based InfiniBand solutions are enabling us to deliver on that promise with maximum ROI.”

Availability
The NFS-RDMA SDK is a free, open-source package available now from Mellanox (http://www.mellanox.com/products/nfs_rdma_sdk.php). The package accelerates OEM development of a complete NFS-RDMA storage solution and includes both the NFS-RDMA client and server software stacks, which are compatible with virtually any commodity Linux-based x86 server. Come see a demonstration of the NFS-RDMA SDK using the Mellanox MTD2000 platform reference design (http://www.mellanox.com/products/mtd2000.php) at the Mellanox booth (#314) at LinuxWorld 2007 in San Francisco, August 6-9, 2007.
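
For readers interested in reproducing the client side of such a setup, the upstream Linux NFS/RDMA documentation describes loading the RDMA transport module and mounting an export with the rdma option on port 20049. The sketch below follows that general procedure, not the Mellanox SDK itself; the server name, export path, and mount point are assumptions, and the commands require root privileges.

    # Illustrative sketch: mount an NFS export over the RDMA transport on a Linux
    # client, following the upstream NFS/RDMA documentation. Names and paths below
    # are placeholders, not values taken from the Mellanox SDK.
    import subprocess

    SERVER = "nfs-server.example.com"   # hypothetical NFS-RDMA server (IPoIB address)
    EXPORT = "/export/data"             # hypothetical exported directory
    MOUNT_POINT = "/mnt/nfsrdma"        # local mount point (must already exist)

    # Load the client-side RDMA transport module (xprtrdma in mainline kernels).
    subprocess.run(["modprobe", "xprtrdma"], check=True)

    # Mount with the RDMA transport; 20049 is the port used for NFS over RDMA
    # in the upstream documentation.
    subprocess.run(
        ["mount", "-t", "nfs", "-o", "rdma,port=20049",
         f"{SERVER}:{EXPORT}", MOUNT_POINT],
        check=True,
    )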

About Mellanox
Mellanox Technologies is a leading supplier of semiconductor-based, high-performance InfiniBand and Ethernet connectivity products that facilitate data transmission between servers, communications infrastructure equipment and storage systems. The company’s products are an integral part of a total solution focused on computing, storage and communication applications used in enterprise data centers, high-performance computing and embedded systems.

Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California and Yokneam, Israel. For more information, visit Mellanox at www.mellanox.com.

Safe Harbor Statement under the Private Securities Litigation Reform Act of 1995:
All statements included or incorporated by reference in this release, other than statements or characterizations of historical fact, are forward-looking statements. These forward-looking statements are based on our current expectations, estimates and projections about our industry and business, management's beliefs and certain assumptions made by us, all of which are subject to change.
Forward-looking statements can often be identified by words such as "anticipates," "expects," "intends," "plans," "predicts," "believes," "seeks," "estimates," "may," "will," "should," "would," "could," "potential," "continue," "ongoing," similar expressions and variations or negatives of these words. These forward-looking statements are not guarantees of future results and are subject to risks, uncertainties and assumptions that could cause our actual results to differ materially and adversely from those expressed in any forward-looking statement.
The risks and uncertainties that could cause our results to differ materially from those expressed or implied by such forward-looking statements include: the on-going availability of the software product announced in this release; the ability of the product announced in this release to continue to accelerate the time to market for OEMs and end users and to achieve other highlighted benefits; the continued, increased demand for industry standards-based technology; our ability to react to trends and challenges in our business and the markets in which we operate; our ability to anticipate market needs or develop new or enhanced products to meet those needs; the adoption rate of our products; our ability to establish and maintain successful relationships with our distributors; our ability to compete in our industry; fluctuations in demand, sales cycles and prices for our products and services; our ability to protect our intellectual property rights; general political, economic and market conditions and events; and other risks and uncertainties described more fully in our documents filed with or furnished to the Securities and Exchange Commission.
More information about the risks, uncertainties and assumptions that may impact our business are set forth in our Form 10-Q filed with the SEC on May 8, 2007, and our Form 10-K filed with the SEC on March 26, 2007, including “Risk Factors”.  All forward-looking statements in this press release are based on information available to us as of the date hereof, and we assume no obligation to update these forward-looking statements.
Mellanox, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are registered trademarks of Mellanox Technologies. All other trademarks are property of their respective owners.
###

For more information:
Mellanox Technologies
Brian Sparks
408-970-3400
media@mellanox.com

