Data Center Overview
Mellanox’s networking solutions based on InfiniBand, Ethernet, or RoCE (RDMA over Converged Ethernet) provide the best price, performance, and power value proposition for network and storage I/O processing at speeds up to 56Gb/s. Advanced data centers can utilize 56Gb/s InfiniBand, 10/40 Gigabit Ethernet, or RoCE to consolidate I/O onto a single wire, enabling IT managers to deliver significantly higher application service levels while reducing the CapEx and OpEx of their I/O infrastructure. Mellanox provides a large pool of deployment, manageability, and performance tools with its networking products for a myriad of software environments, allowing solutions to be fine-tuned to customer requirements.
Mellanox 56Gb/s InfiniBand-based server adapters and switches provide fault-tolerant and unified connectivity between clustered database servers and native InfiniBand storage, allowing for very high efficiency of CPU and storage capacity usage. The result is 50% less hardware cost to achieve the same level of performance.
The expansion of business-critical information and rich content within extended enterprises continues to change the storage dynamic in a wide range of industries and organizations. This market trend drives the need for higher connectivity speeds and the adoption of clustered architectures. Accessing data over RDMA with file-based protocols such as Microsoft's SMB Direct (SMB over RDMA), a new storage protocol in Windows Server® 2012, enables:
- Increased throughput: leverages the full throughput of high-speed networks, with the network adapters coordinating the transfer of large amounts of data at line speed.
- Low latency: provides extremely fast responses to network requests and, as a result, makes remote file storage feel as if it were directly attached block storage.
- Low CPU utilization: uses fewer CPU cycles when transferring data over the network, which leaves more processing power available to server applications.
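The low-CPU-utilization point above stems from RDMA's zero-copy data path: the network adapter moves data directly between application buffers, bypassing the host CPU. As a rough analogy only (real SMB Direct uses RDMA verbs on the adapter, not a syscall), the kernel's sendfile(2) illustrates the same idea of skipping user-space copies; the sketch below sends a file over a local socket without the usual read/write buffer loop:

```python
# Hedged sketch: zero-copy transfer via os.sendfile, as an analogy to
# RDMA's copy avoidance. This is NOT the SMB Direct implementation.
import os
import socket
import tempfile

def zero_copy_send(path: str, sock: socket.socket) -> int:
    """Send a file's bytes over `sock` without copying them into user space."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        sent = 0
        while sent < size:
            # The kernel moves data from the page cache straight to the socket.
            sent += os.sendfile(sock.fileno(), f.fileno(), sent, size - sent)
    return sent

# Demo over a local socket pair with a temporary 7 KiB payload.
left, right = socket.socketpair()
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"payload" * 1024)
    path = tmp.name

n = zero_copy_send(path, left)
left.shutdown(socket.SHUT_WR)

received = b""
while chunk := right.recv(65536):
    received += chunk

left.close()
right.close()
os.unlink(path)
```

In a real RDMA deployment the copy avoidance happens on the NIC itself, so even the kernel-side work that sendfile still performs is offloaded from the host CPU.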
Increased density of virtual machines on a single system within a data center is driving more I/O connectivity per physical server. Multiple 1 Gigabit Ethernet NICs and Fibre Channel HBAs are used in a single enterprise system to connect to the outside world for data exchange. Such hardware proliferation increases I/O cost, complicates cable management, and consumes I/O slots. Managing such traffic patterns requires higher-speed networking solutions that can run multiple protocols simultaneously.
Green computing is achieved in multiple ways, such as infrastructure consolidation, greater overall solution power efficiency, and optimized system utilization across varying traffic patterns. It also includes adopting industry-leading materials that meet environmental compliance standards. Networking solutions that lower system CPU utilization also directly lower data center utility bills.