InfiniBand White Papers
Unlock In-Server Flash with InfiniBand and Symantec Cluster File System (December 2013)
While 10Gb/s and 40Gb/s Ethernet may look like alternatives, InfiniBand (IB) currently supports up to 56Gb/s, with a roadmap to 100Gb/s and 200Gb/s. Both Ethernet and IB hold a considerable advantage over Fibre Channel. Mellanox InfiniBand provides a high-throughput, low-latency interconnect for moving data between servers and storage systems. Although traditionally used in high-performance computing (HPC) environments, InfiniBand provides the capability to unlock the potential of in-server flash.
Turn Your Data Center into a Mega-Datacenter (September 2013)
This paper describes the advantages of Mellanox's MetroX long-haul switch system, and how it allows you to move from
the paradigm of multiple, disconnected, localized data centers to a single, multi-point meshed mega-datacenter. In other words, remote data center sites can now be localized through long-haul connectivity,
providing benefits such as faster compute, higher-volume data transfer, and improved business continuity.
Fraunhofer ITWM demonstrates GPI 2.0 with Mellanox Connect-IB and Intel® Xeon Phi (June 2013)
Over the last decade, specialized heterogeneous hardware designs ranging from Cell through GPGPUs to the Intel Xeon Phi have become a viable option in high-performance computing, mostly because these heterogeneous architectures allow for a better flops-per-watt ratio than conventional multi-core designs. The upcoming GASPI standard will be able to bridge this gap in the sense that GASPI can provide partitioned global address spaces (so-called segments) which span both host memory and, for example, the memory of an Intel Xeon Phi.
Performance Optimizations via Connect-IB™ and Dynamically Connected Transport™ Service for Maximum Performance on LS-DYNA® (June 2013)
From concept to engineering, and from design to test and manufacturing, the automotive industry relies on powerful virtual development solutions. CFD and crash simulations are performed in an effort to secure quality and accelerate the development process. LS-DYNA® relies on the Message Passing Interface (MPI), the de-facto messaging library for high-performance clusters, for node-to-node communication. MPI in turn relies on a fast server and storage interconnect to provide low latency and a high messaging rate. The more complex the simulation being performed to better capture the physical model's behavior, the higher the performance demands placed on the cluster interconnect.
Highly Accurate Time Synchronization with ConnectX®-3 and TimeKeeper® (March 2013)
Upgrading your trading platforms to reliable, precise time is achievable at low cost and with a rapid deployment model by combining Mellanox's ConnectX®-3 network adapter cards with TimeKeeper® Client software. TimeKeeper can assure sub-microsecond time precision from either the newer IEEE 1588 Precision Time Protocol (PTP) or the standard Network Time Protocol (NTP) over shared (not dedicated) network links. Flexibility in time sources and automatic adaptability to network quality allow for incremental changes to enterprise systems: immediate high-precision timing in critical components, while less critical components see incremental performance improvement. For high-quality links and time feeds, applications can see time locked to the reference well within 500 nanoseconds of variation.
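To illustrate how a client protocol such as NTP estimates its clock error from timestamps, the classic four-timestamp offset/delay calculation can be sketched as follows. This is a generic sketch of the standard NTP on-wire arithmetic (RFC 5905), not TimeKeeper's implementation, and the timestamp values are hypothetical examples.

```python
# Sketch of the classic NTP offset/delay calculation from four timestamps.
# This is generic protocol arithmetic, not TimeKeeper-specific code.

def ntp_offset_delay(t0, t1, t2, t3):
    """t0: client send, t1: server receive, t2: server send, t3: client receive.
    Returns (clock_offset, round_trip_delay) in the same time units,
    assuming symmetric network delay."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Hypothetical example: the client clock runs 5 ms behind the server,
# with a 2 ms one-way network delay and 1 ms server processing time.
offset, delay = ntp_offset_delay(100.0, 107.0, 108.0, 105.0)
print(offset, delay)  # 5.0 4.0
```

The same request/response timestamping idea underlies PTP; hardware timestamping in the adapter removes most of the software-induced asymmetry that limits this calculation's accuracy.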
Power Saving Features in Mellanox Products (January 2013)
This paper introduces the "green" fabric concept, presents the Mellanox power-efficient features under development as part of the European Commission's ECONET project, walks through a real-world data center scenario, and outlines additional steps to be taken toward "green" fabrics. The features described in this paper can reduce power consumption by up to 43%. Summed over a real-world data center scenario, a total reduction of 13% in all network components’ power consumption is demonstrated. This reduction can amount to millions of dollars in savings over several years.
- FDR InfiniBand is Here
High-speed InfiniBand server and storage connectivity has become the de facto scalable solution for systems of any size, ranging from small, departmental compute infrastructures to the world's largest PetaScale systems. Its rich feature set and design flexibility enable users to deploy InfiniBand connectivity between servers and storage in various architectures and topologies to meet performance and/or productivity goals. These benefits make InfiniBand
- Building a Scalable Storage with InfiniBand
It will come as no surprise to those working in data centers today that an increasing share of capital and operational expense is associated with building and maintaining storage systems. Many factors drive the need for increased storage capacity and performance. Increased compute power and new software paradigms are making it possible to perform useful analytics on vast repositories of data. The falling cost per gigabyte is making it possible for organizations to store more granular data and to keep data for longer periods of time.
- Security in Mellanox Technologies InfiniBand Fabrics
InfiniBand is a systems interconnect designed for data center networks and clustering environments. Already, it is the fabric of choice for high-performance computing, education, life sciences, oil and gas, and auto manufacturing, and increasingly for financial services applications.
- Introduction to Cloud Design
Cloud computing is a collection of technologies and practices used to abstract the provisioning and management of computer hardware. The goal is to simplify the user's experience so they can get the benefit of compute resources on demand, or, in the language of cloud computing, "as a service."
- TIBCO, HP and Mellanox High Performance Extreme Low Latency Messaging
With the recent release of TIBCO FTL™, TIBCO is once again changing the game when it comes to providing high-performance
messaging middleware. Many solutions have emerged that try to provide next-generation systems with extremely low latency, but they
do so by sacrificing the traditional features and functions that mission-critical middleware solutions require. TIBCO's
approach is to offer a middleware solution with extremely low latency and without sacrifice, scaling not
only to meet the demands of low-latency data distribution but also to meet demand as the application grows from a few
instances to thousands of instances.
Mellanox InfiniBand FDR 56Gb/s For Server and Storage Interconnect Solutions (June 2011)
Choosing the right interconnect technology is essential for maximizing system and application performance and efficiency.
Slow interconnects delay data transfers between servers, causing poor utilization of system resources and slow execution
Informatica, HP, and Mellanox/Voltaire Benchmark Report: Ultra Messaging accelerated across three supported interconnects
The securities trading market is experiencing rapid growth in volume and complexity with a greater reliance on trading software,
which is supported by sophisticated algorithms. As this market grows, so do the trading volumes, bringing existing IT
infrastructure systems to their limits.
Introduction to InfiniBand for End Users: Industry-Standard Value and Performance for High Performance Computing and the
InfiniBand is not complex. Despite its reputation as an exotic technology, the concepts behind it are
surprisingly straightforward. One purpose of this book is to clearly describe the basic concepts behind the InfiniBand
LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance (June 2010)
The cluster interconnect is critical to the efficiency and performance of applications in the multi-core era. When more
CPU cores are present, overall cluster productivity increases only in the presence of a high-speed interconnect. We have
compared elapsed time with LS-DYNA using 40Gb/s InfiniBand and Gigabit Ethernet.
CORE-Direct: The Most Advanced Technology for MPI/SHMEM Collectives Offloads (May 2010)
Mellanox CORE-Direct technology provides the most complete and advanced solution for offloading MPI collective
operations from the software library to the network. CORE-Direct not only accelerates MPI applications but also addresses
scalability issues in large-scale systems by eliminating the effects of OS noise and jitter.
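To make concrete what a "collective operation" is, the kind of exchange that CORE-Direct moves from software into the HCA can be sketched in pure Python as a recursive-doubling allreduce over simulated ranks. This is a minimal illustration of the collective's communication pattern under the stated power-of-two assumption, not the Mellanox offload mechanism itself.

```python
# Minimal simulation of a recursive-doubling allreduce (sum) across
# a power-of-two number of "ranks". In a real MPI job each rank is a
# process exchanging messages; here a list index stands in for a rank.

def allreduce_sum(values):
    """values[i] is rank i's local contribution; returns the value list
    after the collective, when every rank holds the global sum."""
    n = len(values)
    assert n & (n - 1) == 0, "recursive doubling assumes a power-of-two rank count"
    vals = list(values)
    step = 1
    while step < n:
        # In each round, rank i exchanges partial sums with partner
        # i XOR step, and both sides accumulate what they received.
        vals = [vals[i] + vals[i ^ step] for i in range(n)]
        step *= 2
    return vals

print(allreduce_sum([1, 2, 3, 4]))  # [10, 10, 10, 10]
```

The pattern completes in log2(n) rounds; offloading those rounds to the adapter lets them progress even while the host CPU is descheduled or absorbing OS jitter.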
NVIDIA GPUDirect Technology - Accelerating GPU-based Systems (May 2010)
The new NVIDIA GPUDirect technology, when used with Mellanox InfiniBand, enables NVIDIA Tesla and Fermi GPUs to communicate
faster by eliminating the need for the CPU to be involved in the communication loop and removing an intermediate buffer copy. The
result is increased overall system performance and efficiency, reducing GPU-to-GPU communication time by 30%.
Cut I/O Power and Cost while Boosting Server Performance (April 2009)
I/O technology plays a key role in reducing space and power in the data center, lowering TCO, and enhancing data
The Case for Low-Latency Ethernet (March 2009)
The industry momentum behind Fibre Channel over Ethernet (FCoE) sets a significant precedent and raises questions about
the best approach to server-to-server messaging (also called inter-process communication, or IPC) using zero-copy send/receive
and remote DMA (RDMA) technologies over Ethernet.
Virtualizing Data Center Memory for Performance and Efficiency (February 2009)
By combining RNA Networks’ Memory Virtualization Platform with Mellanox Technologies’ unrivaled connectivity
performance, data center architects can achieve new levels of performance with high efficiency and lower costs.
Accelerating Automotive Design with InfiniBand (February 2009)
CAE simulation and analysis are highly sophisticated applications that enable engineers to gain insight into complex
phenomena and to investigate physical behavior virtually. To produce the best possible results, these simulation
solutions require high-performance compute platforms. In this paper we investigate the optimal use of high-performance
clusters for maximum efficiency and productivity in CAE applications, and in automotive design in particular.
The Case for InfiniBand over Ethernet (April 2008)
There are two competing technologies for IPC – InfiniBand and iWARP (based on 10GigE). If one were to apply the
same business and technical logic behind the initial success of FCoE, one would conclude that InfiniBand over Ethernet
(IBoE) makes the most sense. Here is why.
Importance of Unified I/O in VMware® ESX Servers (March 2008)
When it comes to unifying I/O on the servers, there are only two options – 10GigE NICs or InfiniBand HCAs. What
should you deploy, especially in VMware ESX server environments?
InfiniBand Software and Protocol White Paper (December 2007)
The InfiniBand software stack is designed from the ground up to enable ease of application deployment. IP and TCP socket
applications can take advantage of InfiniBand performance without requiring any change to existing applications that run over
InfiniBand for Storage Applications (December 2007)
Storage solutions can benefit today from the price, performance and high availability advantage of Mellanox’s
industry-standard InfiniBand products.
Using RDMA to increase processing performance (April 2007)
Applications are increasing the demand for CPU processing performance and the amount of data being transferred between
subsystems. Offloading data movement to I/O hardware increases the amount of CPU resources available for these applications,
boosting the system’s performance.
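The core idea — that avoiding CPU-driven buffer copies frees CPU cycles for the application — has a simple user-space analogue in Python, sketched below. This is only an analogy for the copy-avoidance principle, not the RDMA verbs API: a memoryview shares the underlying buffer, while slicing bytes materializes a copy, just as RDMA moves data directly between registered buffers instead of staging it through CPU-driven copies.

```python
# User-space analogue of avoiding buffer copies. A memoryview shares the
# underlying storage; a bytes slice allocates and fills a new buffer.
# RDMA applies the same principle at the fabric level.

payload = bytearray(b"x" * 1_000_000)

view = memoryview(payload)[:4096]   # no data copied, just a window
copy = bytes(payload[:4096])        # a real 4 KiB copy

# Mutating the source is visible through the view but not the copy.
payload[0] = ord("y")
print(view[0] == ord("y"))   # True  - shared storage
print(copy[0] == ord("y"))   # False - independent buffer
```

In the RDMA case the savings are larger still, because the copy being eliminated would otherwise consume CPU time and memory bandwidth on every transfer.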
Optimum Connectivity in the Multi-core Environment (March 2007)
Multi-core is changing everything. What effect do you think multi-core has on the interconnect requirements for
your cluster? Hint: more cores need more interconnect.
Consolidating Network Fabrics to Streamline Data Center Connectivity (February 2007)
Cost and performance issues are pushing developers to seek convergence of interconnects in data centers. Both 10 Gigabit
Ethernet and InfiniBand appear to have the potential to meet these demands, which militate against Fibre Channel.
I/O Virtualization Using Mellanox InfiniBand and Channel I/O Virtualization (CIOV) Technology (January 2007)
Server virtualization technologies offer many benefits that enhance agility of data centers to adapt to changing business
needs, while reducing total cost of ownership.
Single-Points of Performance (December 2006)
The most common approach for comparing between different interconnect solutions is the “single-points”
Real Application Performance and Beyond (December 2006)
The interconnect bandwidth and latency have traditionally been used as two metrics for assessing the performance of the
system’s interconnect fabric. However, these two metrics are typically not sufficient to determine the performance
of real world applications.
Weather Research and Forecast (WRF) Model Port to Windows:
Preliminary Report (November 2006)
The Weather Research and Forecast (WRF) project is a multi-year/multi-institution
collaboration to develop a next generation regional forecast model and data assimilation system for operational numerical
weather prediction (NWP) and atmospheric research.
Why Compromise? - A discussion on RDMA versus Send/Receive and the difference between interconnect and application semantics (November 2006)
Architecture and Implementation of Sockets Direct Protocol in Windows (May 2006)
Sockets Direct Protocol (SDP) enables socket based applications to transparently utilize the RDMA and transport offload
capabilities of the InfiniBand fabric.
Scale Up: Building a State-of-the-Art Enterprise Supercomputer (May 2006)
Building a state-of-the-art enterprise supercomputer requires a partnership among vendors that supply commodity parts.
InfiniBand in the Enterprise Data Center (April 2006)
InfiniBand offers a compelling value proposition to IT managers who value data center agility and lowest total cost of ownership.
Scaling 10Gb/s Clusters at Wire-Speed (April 2006)
Data center and high performance computing clusters that cannot compromise on scalable and deterministic performance need
the ability to construct large node count non-blocking switch configurations.
Can Memory-Less Network Adapters Benefit Next-Generation InfiniBand
Systems? (December 2005)
Memory-less adapters allow more efficient use of overall system memory and show practically no performance impact (less than 0.1%)
for the NAS Parallel Benchmarks on 8 processes.
InfiniBand -- Industry Standard Data Center Fabric is Ready for Prime Time (December 2005)
Server and storage clusters benefit today from industry-standard InfiniBand’s price, performance, stability, and widely
available software leading to a convergence in the data center.
Deploying Quality of Service and Congestion Control in InfiniBand-based Data Center Networks (November 2005)
The InfiniBand architecture defined by IBTA includes novel Quality of Service and Congestion Control features that are
tailored perfectly to the needs of Data Center Networks.
Transparently Achieving Superior Socket Performance Using Zero Copy Socket Direct Protocol over 20Gb/s InfiniBand Links (September 2005)
An implementation of Zero Copy support for synchronous send()/recv() socket calls that uses the remote DMA capability of
InfiniBand for SDP data transfers.
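The principle behind SDP's zero-copy path — sending application data without staging it through an intermediate user-space buffer — also exists on ordinary sockets via the kernel's sendfile path, which Python exposes as socket.sendfile(). The sketch below uses that standard-library mechanism as an analogy only; it is not SDP and involves no RDMA.

```python
# Illustration of a zero-copy send path on ordinary sockets.
# socket.sendfile() lets the kernel move file data to the socket
# without a user-space staging copy; SDP's ZCopy mode achieves a
# similar effect for send()/recv() using InfiniBand RDMA instead.
import socket
import tempfile

data = b"zero-copy payload " * 512  # 9 KiB test payload

with tempfile.TemporaryFile() as f:
    f.write(data)
    f.seek(0)
    a, b = socket.socketpair()
    with a:
        sent = a.sendfile(f)        # kernel-side transfer, no user copy
    received = bytearray()
    with b:
        while chunk := b.recv(65536):
            received += chunk

print(sent == len(data), bytes(received) == data)
```

The benefit in both cases is the same: the bytes cross from source to destination without the CPU touching them an extra time, which matters most at high message rates.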
Zero Copy Sockets Direct Protocol over InfiniBand - Preliminary Implementation and Performance Analysis (August 2005)
This paper presents the major architectural aspects of the SDP protocol, the ZCopy implementation, and a preliminary
Past White Papers