Applications and Markets
A single fabric infrastructure is increasingly considered for network build-outs not only in high-performance computing environments, which are dominantly InfiniBand-based, but also in mainstream enterprise grids and in data center server, storage and virtualized environments. The low latency, high performance and efficient CPU utilization provided by Mellanox InfiniBand and Ethernet solutions, coupled with the economic benefits of consolidation, performance boosts, manageability and network virtualization, have helped end customers build out their applications in the most cost-effective manner. Mellanox networking solutions, whether based on InfiniBand or Ethernet, are uniquely positioned to satisfy these networking needs.
Learn how Mellanox's InfiniBand and Ethernet technology can take your solution to the next level of performance, power and cost.
With InfiniBand's proven scalability and efficiency, small and large clusters easily scale up to thousands of nodes. By providing low latency, high bandwidth, a high message rate, transport offload for extremely low CPU overhead, Remote Direct Memory Access (RDMA) and advanced communication offloads, Mellanox interconnect solutions are the most deployed high-speed interconnect for large-scale simulations, replacing proprietary or low-performance solutions. Mellanox Scalable HPC interconnect solutions are paving the road to Exascale computing by delivering the highest scalability, efficiency and performance for HPC systems today and in the future.
Mellanox data center networking solutions based on Virtual Protocol Interconnect (VPI) technology enable seamless connectivity to 56Gb/s InfiniBand and/or 10/40 Gigabit Ethernet, depending on networking requirements. VPI provides I/O infrastructure flexibility and future-proofing for data center computing environments, allowing any standard networking, clustering, storage and management protocol to operate seamlessly over any converged network with the same software infrastructure. Mellanox 10/40GbE solutions deliver lower cost, power, latency and CPU utilization for Ethernet-based blade, standard rack and tower environments. Cloud providers can utilize 56Gb/s InfiniBand or 10/40GbE/FCoE to consolidate I/O onto a single wire, enabling IT managers to deliver significantly higher application service levels while achieving their business goals of increased productivity and reduced CAPEX and OPEX related to technology I/O spending.
Mellanox has supported the government's IT networking needs for more than 10 years and has established itself as a trusted leader in delivering high-performance connectivity solutions. Many federal agencies have high-performance networking requirements for complex projects that involve processing large amounts of data over distributed systems. Mellanox products are specified for high-speed storage networks and for the clustering of processors, parallel file processing, GPUs and heterogeneous storage platforms.
The interconnect of storage appliances to external environments will play a significant role as the adoption of solid-state-based appliances accelerates. The I/O bottleneck will gradually move away from the storage components inside these appliances to their access points in the data center. These unique storage requirements can be satisfied with Mellanox InfiniBand, Ethernet or FCoE solutions, which provide high bandwidth, low latency, dedicated I/O channels, QoS and RDMA features.
InfiniBand has emerged as an ideal solution for many embedded applications such as high-speed I/O links, reliable backplanes, and scalable switch fabrics. With reliability, availability, and serviceability (RAS) built into the architecture, InfiniBand’s superior capabilities are enabling outstanding functionality in non-traditional systems.
High-performance compute clusters require an interconnect technology that provides high bandwidth, low latency and low CPU overhead, freeing CPU cycles for the application’s compute operations.