Data Center Overview
Mellanox's networking solutions based on InfiniBand, Ethernet or RoCE provide the best price, performance and power value proposition for network and storage I/O processing at speeds up to 56Gb/s. Advanced data centers can utilize 56Gb/s InfiniBand, 10/40GbE or RoCE (RDMA over Converged Ethernet) to consolidate I/O onto a single wire, enabling IT managers to deliver significantly higher application service levels while reducing the CapEx and OpEx associated with I/O infrastructure. Mellanox also provides a broad set of deployment, manageability and performance tools with its networking products, covering a wide range of software environments, so solutions can be fine-tuned to customer requirements.
Mellanox 56Gb/s InfiniBand-based server adapters and switches provide fault-tolerant, unified connectivity between clustered database servers and native InfiniBand storage, enabling highly efficient use of CPU and storage capacity. The result is up to 50% lower hardware cost for the same level of performance. In fact, since 2008, Mellanox InfiniBand has been prominently featured in Larry Ellison's keynote speeches at Oracle OpenWorld regarding the Oracle Exadata database appliance. In addition, IBM's clustered DB2 pureScale has been certified to run over Mellanox InfiniBand and RoCE connectivity solutions, and in late 2011 Microsoft announced that the upcoming version of SQL Server would add support for clustered SQL servers and system databases connected over InfiniBand and RoCE.
The continued expansion of business-critical information and rich content within extended enterprises continues to change the storage dynamic across a wide range of industries and organizations. This market trend drives the need for higher connectivity speeds and the adoption of clustered architectures. Data access over RDMA using file-based protocols such as Microsoft's SMB Direct (SMB over RDMA), a new storage protocol in Windows Server® 2012, enables:
- Increased throughput: Leverages the full throughput of high speed networks where the network adapters coordinate the transfer of large amounts of data at line speed.
- Low latency: Provides extremely fast responses to network requests, and, as a result, makes remote file storage feel like directly attached block storage.
- Low CPU utilization: Uses fewer CPU cycles when transferring data over the network, which leaves more power available to server applications.
Increased density of virtual machines on a single system within a data center is driving more I/O connectivity per physical server. Multiple Gigabit Ethernet NICs and Fibre Channel HBAs are installed in a single enterprise system to connect to the outside world for data exchange. Such hardware proliferation has increased I/O cost, convoluted cable management, and consumed I/O slots. Managing such network patterns requires higher-speed networking solutions that can run multiple protocols simultaneously.
Green computing is achieved in multiple ways, such as infrastructure consolidation, lower overall solution power, and optimized system utilization across various traffic patterns. It also includes embracing and adopting leading industry materials that are compliant with green-environment standards. Networking solutions that lower system CPU utilization have a direct impact on lowering data center utility bills.