ConnectX® InfiniBand Adapter Devices

Overview

ConnectX delivers low latency and high bandwidth for performance-driven server and storage clustering applications. Network protocol processing and data movement tasks, such as InfiniBand RDMA and Send/Receive semantics, are completed in the adapter without CPU intervention. Servers supporting PCI Express 2.0 at 5GT/s can take full advantage of 40Gb/s InfiniBand: an x8 PCIe 2.0 link matches the 32Gb/s effective data rate of a 40Gb/s 4X QDR port once 8b/10b encoding is accounted for on both links.
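
To make the offload model concrete, the minimal sketch below posts a one-sided RDMA WRITE through the standard libibverbs API. It assumes connection setup has already happened (QP state transitions and the out-of-band exchange of QP number, LID, remote address, and rkey are elided), and the function name post_rdma_write is illustrative, not part of any library. Once ibv_post_send() returns, the adapter performs the data movement while the CPU is free to do other work.

    /* Hedged sketch: posting an RDMA WRITE with libibverbs.
     * The CPU only builds and posts a work request; the HCA moves the
     * data. remote_addr and remote_rkey are assumed to have been
     * obtained from the peer over an out-of-band channel. */
    #include <stddef.h>
    #include <stdint.h>
    #include <infiniband/verbs.h>

    int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                        void *local_buf, size_t len,
                        uint64_t remote_addr, uint32_t remote_rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,  /* registered local buffer */
            .length = (uint32_t)len,
            .lkey   = mr->lkey,
        };
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_WRITE,  /* one-sided: no remote CPU involvement */
            .send_flags = IBV_SEND_SIGNALED,  /* request a completion entry */
        };
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = remote_rkey;

        struct ibv_send_wr *bad_wr = NULL;
        /* After this returns, the adapter performs the transfer; the CPU
         * is idle until it polls the completion queue. */
        return ibv_post_send(qp, &wr, &bad_wr);
    }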

With ConnectX, clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, with reduced completion time and lower cost per operation.


Features

  • Single chip architecture
  • Integrated SerDes
  • No local memory needed
  • 1.2μs MPI ping latency
  • 10, 20, or 40Gb/s InfiniBand ports
  • PCI Express 2.0 (up to 5GT/s)
  • CPU offload of transport operations (capability limits can be queried in software; see the sketch after this list)
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • TCP/UDP/IP stateless offload
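
The hedged sketch below shows how software can read back the limits of the hardware transport engine (queue pairs, memory regions, completion queue entries) using the standard libibverbs ibv_query_device() call. The choice of the first device in the list is an illustrative assumption.

    /* Hedged sketch: discovering adapter capabilities with libibverbs. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(list[0]);
        if (!ctx) { ibv_free_device_list(list); return 1; }

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0) {
            printf("device:   %s\n", ibv_get_device_name(list[0]));
            printf("max QPs:  %d\n", attr.max_qp);   /* transport contexts handled in hardware */
            printf("max MRs:  %d\n", attr.max_mr);   /* registered memory regions */
            printf("max CQEs: %d\n", attr.max_cqe);  /* completion queue depth */
        }

        ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
    }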

Benefits

  • World-class cluster performance
  • High-performance networking and storage access
  • Guaranteed bandwidth and low-latency services
  • Reliable transport
  • End-to-end storage integrity
  • I/O consolidation
  • Virtualization acceleration
  • Scales to tens-of-thousands of nodes

Device Specifications

  • Small PCB footprint
  • Dual 4X InfiniBand ports
  • IBTA v1.2.1 compatible design
  • PCI Express 2.0 x8 (1.1 compatible)
  • Management interfaces (DMTF compatible, Fast Management Link)
  • 4x 16MB serial Flash interface
  • Dual I2C interfaces
  • IEEE 1149.1 boundary-scan JTAG
  • Link status LED indicators
  • General purpose I/O
  • 21 x 21mm HFCBGA
  • RoHS-5 compliant
  • Requires 3.3V, 2.5V, 1.8V, 1.2V supplies