InfiniBand White Papers
Highly Accurate Time Synchronization with ConnectX®-3 and TimeKeeper® (March 2013)
Upgrading your trading platforms to reliable, precise time is achievable at low cost and with rapid deployment by combining Mellanox's ConnectX®-3 network adapter cards with TimeKeeper® Client software. TimeKeeper can assure sub-microsecond time precision from either the newer IEEE 1588 Precision Time Protocol (PTP) or the standard Network Time Protocol (NTP), over shared (not dedicated) network links. Flexibility in time sources and automatic adaptability to network quality allow for incremental changes to enterprise systems: critical components gain high-precision timing immediately, while less critical components see incremental performance improvement. With high-quality links and time feeds, applications can see time locked to reference well within 500 nanoseconds of variation.
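The NTP-style exchange that protocols like this build on can be sketched with the standard four-timestamp offset/delay calculation. This is a minimal illustration of the on-wire arithmetic, not TimeKeeper's actual algorithm:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic NTP clock-offset and round-trip-delay estimate.

    t1: client transmit time (client clock)
    t2: server receive time  (server clock)
    t3: server transmit time (server clock)
    t4: client receive time  (client clock)
    All timestamps in seconds.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # how far the client clock lags the server
    delay = (t4 - t1) - (t3 - t2)           # round-trip network delay, minus server hold time
    return offset, delay

# Synthetic example: server clock 1.5 ms ahead of client, 2 ms symmetric round trip,
# 0.5 ms server processing time.
off, d = ntp_offset_delay(10.0000, 10.0025, 10.0030, 10.0025)
# off ≈ +0.0015 s, d ≈ 0.0020 s
```

The offset estimate is exact only when the path is symmetric; asymmetric network delay appears directly as offset error, which is why shared-link timing software must also judge sample quality.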
Power Saving Features in Mellanox Products (January 2013)
The growth in Cloud and Web 2.0 storage and compute requirements in recent years has led to an increase in demand for larger, stronger, and more cost efficient data centers.
- FDR InfiniBand is Here
High-speed InfiniBand server and storage connectivity has become the de facto scalable solution for systems of any size – ranging from small, departmental compute infrastructures to the world's largest PetaScale systems. The rich feature set and design flexibility enable users to deploy InfiniBand connectivity between servers and storage in various architectures and topologies to meet performance and/or productivity goals. These benefits make InfiniBand…
- Building Scalable Storage with InfiniBand
It will come as no surprise to those working in data centers today that an increasing amount of capital and operational expense is associated with building and maintaining storage systems. Many factors drive the need for increased storage capacity and performance. Increased compute power and new software paradigms are making it possible to perform useful analytics on vast repositories of data. The falling cost per gigabyte is making it possible for organizations to store more granular data and to keep data for longer periods of time.
- Security in Mellanox Technologies InfiniBand Fabrics
InfiniBand is a systems interconnect designed for data center networks and clustering environments. Already, it is the fabric of choice for high-performance computing, education, life sciences, oil and gas, auto manufacturing and, increasingly, financial services applications.
- Introduction to Cloud Design
Cloud computing is a collection of technologies and practices used to abstract the provisioning and management of computer hardware. The goal is to simplify the user's experience so they can get the benefit of compute resources on demand, or, in the language of cloud computing, "as a service".
- TIBCO, HP and Mellanox High Performance Extreme Low Latency Messaging
With the recent release of TIBCO FTL™, TIBCO is once again changing the game when it comes to providing high-performance messaging middleware. Many solutions have emerged that try to provide next-generation systems with extreme low latency, but they do so by sacrificing the traditional features and functions that mission-critical middleware solutions require. TIBCO's approach is to offer a middleware solution that delivers extreme low latency without sacrifice, with the scalability not only to meet the demands of low-latency data distribution but also to keep pace as the application grows from a few instances to thousands of instances.
Mellanox InfiniBand FDR 56Gb/s For Server and Storage Interconnect Solutions (June 2011)
Choosing the right interconnect technology is essential for maximizing system and application performance and efficiency. Slow interconnects delay data transfers between servers, causing poor utilization of system resources and slow execution of applications.
Informatica, HP, and Mellanox/Voltaire Benchmark Report: Ultra Messaging accelerated across three supported interconnects
The securities trading market is experiencing rapid growth in volume and complexity with a greater reliance on trading software, which is supported by sophisticated algorithms. As this market grows, so do the trading volumes, bringing existing IT infrastructure systems to their limits.
Introduction to InfiniBand for End Users: Industry-Standard Value and Performance for High Performance Computing and the…
InfiniBand is not complex. Despite its reputation as an exotic technology, the concepts behind it are surprisingly straightforward. One purpose of this book is to clearly describe the basic concepts behind the InfiniBand Architecture.
LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance (June 2010)
The cluster interconnect is critical to application efficiency and performance in the multi-core era. As more CPU cores are added, overall cluster productivity increases only in the presence of a high-speed interconnect. We compare LS-DYNA elapsed time using 40Gb/s InfiniBand and Gigabit Ethernet.
CORE-Direct: The Most Advanced Technology for MPI/SHMEM Collectives Offloads (May 2010)
Mellanox CORE-Direct technology provides the most complete and advanced solution for offloading MPI collective operations from the software library to the network. CORE-Direct not only accelerates MPI applications but also addresses scalability in large-scale systems by eliminating the effects of OS noise and jitter.
NVIDIA GPUDirect Technology - Accelerating GPU-based Systems (May 2010)
The new NVIDIA GPUDirect technology, when used with Mellanox InfiniBand, enables NVIDIA Tesla and Fermi GPUs to communicate faster by removing the CPU from the communication loop and eliminating a buffer copy. The result is increased overall system performance and efficiency, reducing GPU-to-GPU communication time by 30%.
Cut I/O Power and Cost while Boosting Server Performance (April 2009)
I/O technology plays a key role in the reduction of space and power in the data center, reducing TCO, and enhancing data center agility.
The Case for Low-Latency Ethernet (March 2009)
The industry momentum behind Fibre Channel over Ethernet (FCoE) sets a significant precedent and raises the question of what the best approach is for server-to-server messaging (also called inter-process communication, or IPC) using zero-copy send/receive and remote DMA (RDMA) technologies over Ethernet.
Virtualizing Data Center Memory for Performance and Efficiency (February 2009)
By combining RNA Networks’ Memory Virtualization Platform with Mellanox Technologies’ unrivaled connectivity performance, data center architects can achieve new levels of performance with high efficiency and lower costs.
Accelerating Automotive Design with InfiniBand (February 2009)
CAE simulation and analysis are highly sophisticated applications that enable engineers to gain insight into complex phenomena and to investigate physical behavior virtually. To produce the best possible results, these simulation solutions require high-performance compute platforms. In this paper we investigate the optimal use of high-performance clusters for maximum efficiency and productivity in CAE applications, and in automotive design in particular.
The Case for InfiniBand over Ethernet (April 2008)
There are two competing technologies for IPC – InfiniBand and iWARP (based on 10GigE). If one were to apply the same business and technical logic behind the initial success of FCoE, one would conclude that InfiniBand over Ethernet (IBoE) makes the most sense. Here is why.
Importance of Unified I/O in VMware® ESX Servers (March 2008)
When it comes to unifying I/O on the servers, there are only two options – 10GigE NICs or InfiniBand HCAs. What should you deploy, especially in VMware ESX server environments?
InfiniBand Software and Protocol White Paper (December 2007)
The InfiniBand software stack is designed from the ground up to enable ease of application deployment. IP and TCP socket applications can take advantage of InfiniBand performance without requiring any change to existing applications that run over Ethernet.
InfiniBand for Storage Applications (December 2007)
Storage solutions can benefit today from the price, performance and high availability advantage of Mellanox’s industry-standard InfiniBand products.
Using RDMA to increase processing performance (April 2007)
Applications are increasing the demand for CPU processing performance and the amount of data being transferred between subsystems. Offloading data movement to I/O hardware increases the amount of CPU resources available for these applications, boosting the system’s performance.
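RDMA itself requires verbs-capable hardware, but the buffer-copy cost it eliminates can be illustrated with a loose Python analogy: handing out a zero-copy view of a buffer versus copying every byte through the CPU. The names and sizes below are illustrative only:

```python
import time

def copy_transfer(src: bytearray) -> bytes:
    # CPU-driven path: every byte is copied into an intermediate buffer.
    return bytes(src)

def zero_copy_view(src: bytearray) -> memoryview:
    # RDMA-style path: hand out a reference to the existing buffer;
    # no bytes move when the view is created.
    return memoryview(src)

buf = bytearray(64 * 1024 * 1024)  # a 64 MiB buffer standing in for registered memory

t0 = time.perf_counter()
copied = copy_transfer(buf)
t_copy = time.perf_counter() - t0

t0 = time.perf_counter()
view = zero_copy_view(buf)
t_view = time.perf_counter() - t0

# Creating the view takes constant time regardless of buffer size,
# while the copy scales with the amount of data moved. The view also
# stays coherent with the source buffer; the copy does not.
```

The analogy is imperfect (real RDMA moves data across machines via the adapter's DMA engine), but the cost asymmetry is the same: the CPU-copy path consumes cycles proportional to the data volume, which offloading gives back to the application.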
Optimum Connectivity in the Multi-core Environment (March 2007)
Multi-core is changing everything. What effect does multi-core have on the interconnect requirements for your cluster? Hint: more cores need more interconnect.
Consolidating Network Fabrics to Streamline Data Center Connectivity (February 2007)
Cost and performance issues are pushing developers to seek convergence of interconnects in data centers. Both 10 Gigabit Ethernet and InfiniBand appear to have potential, but the demands militate against Fibre Channel.
I/O Virtualization Using Mellanox InfiniBand and Channel I/O Virtualization (CIOV) Technology (January 2007)
Server virtualization technologies offer many benefits that enhance agility of data centers to adapt to changing business needs, while reducing total cost of ownership.
Single-Points of Performance (December 2006)
The most common approach for comparing between different interconnect solutions is the “single-points” approach.
Real Application Performance and Beyond (December 2006)
The interconnect bandwidth and latency have traditionally been used as two metrics for assessing the performance of the system’s interconnect fabric. However, these two metrics are typically not sufficient to determine the performance of real world applications.
Weather Research and Forecast (WRF) Model Port to Windows: Preliminary Report (November 2006)
The Weather Research and Forecast (WRF) project is a multi-year/multi-institution collaboration to develop a next generation regional forecast model and data assimilation system for operational numerical weather prediction (NWP) and atmospheric research.
Why Compromise? (November 2006)
A discussion on RDMA versus Send/Receive and the difference between interconnect and application semantics.
Architecture and Implementation of Sockets Direct Protocol in Windows (May 2006)
Sockets Direct Protocol (SDP) enables socket based applications to transparently utilize the RDMA and transport offload capabilities of the InfiniBand fabric.
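SDP's appeal is that unmodified socket code keeps working. A program like the hypothetical echo sketch below uses only standard TCP socket calls, which an SDP layer (for example a preload library on Linux, or a provider hooked in beneath the sockets API as the paper describes for Windows) can transparently carry over InfiniBand with RDMA and transport offload:

```python
import socket
import threading

def echo_server(srv: socket.socket) -> None:
    # Accept one connection and echo whatever arrives back to the sender.
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Ordinary stream sockets; under SDP these same calls ride over InfiniBand.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # ephemeral port for the demo
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"ping")
reply = cli.recv(1024)
cli.close()
srv.close()
```

The point of the sketch is what is absent: no RDMA-specific API appears anywhere, which is exactly the transparency SDP provides to existing socket applications.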
Scale up: Building a State-of-the-Art Enterprise Supercomputer (May 2006)
Building a state-of-the-art enterprise supercomputer requires a partnership among vendors that supply commodity parts.
InfiniBand in the Enterprise Data Center (April 2006)
InfiniBand offers a compelling value proposition to IT managers who value data center agility and lowest total cost of ownership.
Scaling 10Gb/s Clusters at Wire-Speed (April 2006)
Data center and high performance computing clusters that cannot compromise on scalable and deterministic performance need the ability to construct large node count non-blocking switch configurations.
Can Memory-Less Network Adapters Benefit Next-Generation InfiniBand Systems? (December 2005)
Memory-less adapters allow more efficient use of overall system memory and show practically no performance impact (less than 0.1%) for the NAS Parallel Benchmarks on 8 processes.
InfiniBand -- Industry Standard Data Center Fabric is Ready for Prime Time (December 2005)
Server and storage clusters benefit today from industry-standard InfiniBand’s price, performance, stability, and widely available software leading to a convergence in the data center.
Deploying Quality of Service and Congestion Control in InfiniBand-based Data Center Networks (November 2005)
The InfiniBand architecture defined by IBTA includes novel Quality of Service and Congestion Control features that are tailored perfectly to the needs of Data Center Networks.
Transparently Achieving Superior Socket Performance Using Zero Copy Socket Direct Protocol over 20Gb/s InfiniBand Links (September 2005)
An implementation of Zero Copy support for synchronous send()/recv() socket calls that uses the remote DMA capability of InfiniBand for SDP data transfers.
Zero Copy Sockets Direct Protocol over InfiniBand - Preliminary Implementation and Performance Analysis (August 2005)
This paper presents the major architectural aspects of the SDP protocol, the ZCopy implementation, and a preliminary performance evaluation.
- A Multi-Partner Soft Error Rate Analysis of an InfiniBand Host Channel Adapter
- MPI over InfiniBand: Early Experiences
- InfiniBand Benchmark Subgroup Charter: Summary
- IO Features Matrix - I/O Features now includes 3GIO
- InfiniBand FAQ
Past White Papers
- InfiniHost III Ex MemFree Mode Performance
- InfiniBand Experiences of PC2 - Paderborn Center for Parallel Computing (PC²)
- InfiniBand, PCI Express, & EM64T - Perfectly Balanced Computing Architecture
- A New Approach to Clustering - Distributed Federated Switches
- InfiniBand Clustering - Delivering Better Price/Performance than Ethernet
- Oracle 10g: Infrastructure for Grid Computing
- HP Cluster Interconnects: The Next 5 Years
- Oracle Database 10g: The Database for the Grid
- Intel® Architecture Based InfiniBand* Cluster - TeraFlops Off-The-Shelf (TOTS)
- Horizontal Scaling Fabrics for Sun Fire™ V60x and V65x Servers: InfiniBand
- Oracle InfiniBand White Paper
- InfiniHost™ III HCA Architecture - The Driving Force for PCI Express™ Delivering Over 20 Gb/s of Bandwidth
- Understanding PCI Bus, PCI-Express and InfiniBand Architecture - Interaction among the three technologies
- Mellanox and HPC Clustering - Enabling TeraFlop Computing at a Fraction of the Cost
- The Total Enterprise Solution: IBM DB2 Universal Database Cluster
- PICMG 3.2 Advanced Telecommunications and Computing Architecture
- InfiniBand and TCP in the Data Center
- Realizing the Full Potential of Server, Switch & I/O Blades with InfiniBand Architecture - Server Blade Architecture
- Comparative I/O Analysis - InfiniBand compared with other I/O technologies
- InfiniBand™ in the Internet Data Center - Underlying infrastructure of InfiniBand architecture
- Introduction to InfiniBand - An overview of InfiniBand technology