InfiniBand Switch Systems


Mellanox's family of InfiniBand switches delivers the highest performance and port density, with complete fabric management solutions that enable compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. The Mellanox portfolio includes a broad range of edge and director switches supporting 20, 40, and 56Gb/s port speeds and ranging from 8 to 648 ports. These switches allow IT managers to build cost-effective, scalable switch fabrics, from small clusters up to tens of thousands of nodes, and can carry converged traffic with assured bandwidth and granular quality of service to ensure the highest productivity.

Mellanox's family of switches is designed for performance, serviceability, energy savings, and high availability. These switches are optimized to fit into industry-standard racks and into scale-out computing solutions from industry leaders.

By combining industry-standard InfiniBand technology with integrated Ethernet gateways, Mellanox switches provide a scalable fabric for powering the world's largest and fastest high-performance computing systems and next-generation data centers.

Value Proposition

  • Mellanox switches come in port configurations from 8 to 648 ports at up to 56Gb/s per port, with the ability to build clusters that scale out to thousands of nodes
  • Mellanox switches support LAN and SAN traffic consolidation with unlimited scalability across application, database, and storage servers, ideal for Enterprise Data Center (EDC) and cloud computing applications
  • Mellanox switches deliver high bandwidth with low latency for the highest server efficiency and application productivity, ideal for High-Performance Computing (HPC) applications
  • Mellanox Unified Fabric Manager (UFM) ensures optimal cluster and data center performance with high availability and reliability
  • Best price/performance solution with error-free 40Gb/s and 56Gb/s fabrics

Benefits

  • Built with Mellanox's 4th- and 5th-generation InfiniScale® and SwitchX™ switch silicon
  • Industry-leading energy efficiency, density, and cost savings
  • Ultra low latency
  • Granular QoS for Cluster, LAN and SAN traffic
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications


Edge Switches

Mellanox's family of edge switch systems provides the highest-performing fabric solutions in a 1RU form factor, delivering up to 4Tb/s of aggregate non-blocking bandwidth with 100-200ns port-to-port latency. Each port supports up to 56Gb/s (QSFP connector) of full bidirectional bandwidth. These edge switches are an ideal choice for top-of-rack leaf connectivity or for building small- to medium-sized clusters. Mellanox's edge switches are offered as unmanaged or managed switches to meet a variety of deployment scenarios.
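The switching-capacity figures quoted for these systems follow from port count and link speed: every port carries full bidirectional traffic, so each counts twice toward aggregate capacity (the published numbers round some results). A minimal sanity check in Python, with a hypothetical helper name:

```python
def switching_capacity_gbps(ports: int, link_speed_gbps: int) -> int:
    """Aggregate non-blocking capacity in Gb/s: each port is full
    duplex, so it contributes twice its link speed."""
    return ports * link_speed_gbps * 2

# SX6025: 36 ports at 56Gb/s -> 4032 Gb/s (4.032Tb/s)
assert switching_capacity_gbps(36, 56) == 4032
# IS5023: 18 ports at 40Gb/s -> 1440 Gb/s (1.44Tb/s)
assert switching_capacity_gbps(18, 40) == 1440
# IS5022: 8 ports at 40Gb/s -> 640 Gb/s
assert switching_capacity_gbps(8, 40) == 640
```

The same arithmetic applies to the director chassis, scaled up by their much larger port counts.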



Edge Switches
                      IS5022    IS5023    IS5024    IS5025    SX6005    SX6025
  Ports               8         18        36        36        12        36
  Height              1U        1U        1U        1U        1U        1U
  Switching Capacity  640Gb/s   1.44Tb/s  2.88Tb/s  2.88Tb/s  1.34Tb/s  4.032Tb/s
  Link Speed          40Gb/s    40Gb/s    40Gb/s    40Gb/s    56Gb/s    56Gb/s
  Management          No        No        No        No        No        No
  Management Port     -         -         -         -         -         -
  Console Cables      No        No        No        No        -         No
  PSU Redundancy      No        Optional  Optional  Optional  No        Optional
  Fan Redundancy      No        Optional  Optional  Optional  No        Optional
  Integrated Gateway  -         -         -         -         -         -
                      SX6012           SX6018           IS5030           IS5035           4036             4036E            SX6036
  Ports               12               18               36               36               36               34 + 2 Eth       36
  Height              1U               1U               1U               1U               1U               1U               1U
  Switching Capacity  1.346Tb/s        2.016Tb/s        2.88Tb/s         2.88Tb/s         2.88Tb/s         2.72Tb/s         4.032Tb/s
  Link Speed          56Gb/s           56Gb/s           40Gb/s           40Gb/s           40Gb/s           40Gb/s           56Gb/s
  Management          Yes (648 nodes)  Yes (648 nodes)  Yes (108 nodes)  Yes (648 nodes)  Yes (648 nodes)  Yes (648 nodes)  Yes (648 nodes)
  Management Port     1                2                1                2                1                1                2
  Console Cables      -                Yes              Yes              Yes              Yes              Yes              Yes
  PSU Redundancy      Optional         Optional         Optional         Optional         Optional         Optional         Optional
  Fan Redundancy      No               Optional         Optional         Optional         Yes              Yes              Optional
  Integrated Gateway  Optional         Optional         -                -                -                Yes              Optional




Director Switches

High-Density Chassis Switch Systems

Mellanox's family of director switches provides the highest-density switching solutions, scaling from 8.64Tb/s up to 72.5Tb/s of bandwidth in a single enclosure, with low latency and the highest per-port speeds of up to 56Gb/s. The modular chassis design allows clusters to scale incrementally, so the investment grows with the cluster size.
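The 648-node management figure that recurs across these systems is what a nonblocking two-level fat tree of 36-port switch elements supports: each leaf splits its ports evenly between hosts and spine uplinks, giving radix²/2 hosts. A short sketch, assuming that standard construction (function name is illustrative, not a Mellanox API):

```python
def two_tier_fat_tree(radix: int) -> tuple[int, int, int]:
    """Nonblocking two-level fat tree built from radix-port switch
    elements. Each leaf uses half its ports for hosts and half for
    uplinks; each spine dedicates one port per leaf."""
    hosts_per_leaf = radix // 2     # host-facing ports per leaf
    leaves = radix                  # a spine has `radix` ports, one per leaf
    spines = radix // 2             # total uplinks (leaves * radix/2) / radix
    hosts = leaves * hosts_per_leaf
    return hosts, leaves, spines

hosts, leaves, spines = two_tier_fat_tree(36)
assert hosts == 648    # the recurring 648-node / 648-port figure
assert leaves == 36    # matches the largest chassis' 36 leaf modules
assert spines == 18    # matches its 18 spine modules
```

The same formula explains the largest chassis in the table below: 36 leaf modules and 18 spine modules yield 648 external ports.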




Director Switches
                      IS5100      SX6506      IS5200      SX6512      IS5300
  Ports               108         108         216         216         324
  Height              6U          6U          9U          9U          16U
  Switching Capacity  8.64Tb/s    12.12Tb/s   17.28Tb/s   24.24Tb/s   25.9Tb/s
  Link Speed          40Gb/s      56Gb/s      40Gb/s      56Gb/s      40Gb/s
  Interface Type      QSFP        QSFP+       QSFP        QSFP+       QSFP
  Management          648 nodes   648 nodes   648 nodes   648 nodes   648 nodes
  Management HA       Yes         Yes         Yes         Yes         Yes
  Console Cables      Yes         Yes         Yes         Yes         Yes
  Spine Modules       3           3           6           6           9
  Leaf Modules (Max)  6           6           12          12          18
  PSU Redundancy      Yes (N+1)   Yes (N+N)   Yes (N+1)   Yes (N+N)   Yes (N+2)
  Fan Redundancy      Yes         Yes         Yes         Yes         Yes
                      SX6518      IS5600      SX6536      4200        4700
  Ports               324         648         648         144/162     324 (648 HS)
  Height              16U         29U         29U         11U         19U
  Switching Capacity  36.36Tb/s   51.8Tb/s    72.52Tb/s   11.52Tb/s   25.92Tb/s (51.8Tb/s)
  Link Speed          56Gb/s      40Gb/s      56Gb/s      40Gb/s      40Gb/s
  Interface Type      QSFP+       QSFP        QSFP+       QSFP        QSFP
  Management          648 nodes   648 nodes   648 nodes   648 nodes   648 nodes
  Management HA       Yes         Yes         Yes         Yes         Yes
  Console Cables      Yes         Yes         Yes         Yes         Yes
  Spine Modules       9           18          18          4           9
  Leaf Modules (Max)  18          36          36          9           18
  PSU Redundancy      Yes (N+N)   Yes (N+2)   Yes (N+N)   Yes (N+N)   Yes (N+N)
  Fan Redundancy      Yes         Yes         Yes         Yes         Yes



Advanced Management Capabilities

Mellanox's switch family enables efficient computing with features such as static routing, adaptive routing, and advanced congestion management. These features ensure the maximum effective fabric bandwidth by eliminating congestion hot spots. Whether used for parallel computation or as a converged fabric, Mellanox switches have the industry's best traffic-carrying capacity.

All Mellanox switches can also be coupled with Mellanox's Unified Fabric Manager (UFM™) software for managing scale-out InfiniBand computing environments. UFM enables data center operators to efficiently provision, monitor, and operate the modern data center fabric, boosting application performance and ensuring that the fabric is up and running at all times. UFM can also enhance MLNX-OS with UFM Diag diagnostic tools that check node-to-node and node-to-switch connectivity and provide a cluster topology view.

Virtual Protocol Interconnect® (VPI)

VPI flexibility enables any standard networking, clustering, storage, and management protocol to operate seamlessly over any converged network, leveraging a consolidated software stack. Each port can operate over InfiniBand, Ethernet, and Data Center Bridging (DCB) fabrics, with support for RDMA over Converged Ethernet (RoCE). VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.