Category Archives: Releases

HP updates server, storage and networking line-ups


HP updated its enterprise hardware portfolio, with the most notable additions being networking devices that combine wired and wireless infrastructure to better manage bring-your-own-device policies. One of the highlights is the Mellanox SX1018HP Ethernet Switch, which lowers port latency and improves downlinks.


The Mellanox SX1018HP Ethernet Switch is the highest-performing Ethernet fabric solution in a blade switch form factor. It delivers up to 1.36Tb/s of non-blocking throughput, perfect for High-Performance Computing, High-Frequency Trading and Enterprise Data Center applications.


Utilizing the latest Mellanox SwitchX ASIC technology, the SX1018HP is an ultra-low-latency switch that is ideally suited as an access switch, providing InfiniBand-like performance with sixteen 10Gb/40Gb server-side downlinks and eighteen 40Gb QSFP+ uplinks to the core, with port-to-port latency as low as 230ns.


The Mellanox SX1018HP Ethernet Switch has a rich set of Layer 2 networking and security features, and it supports faster application performance and enhanced server CPU utilization with RDMA over Converged Ethernet (RoCE), making this switch the perfect solution for any high-performance Ethernet network.


Mellanox SX1018HP Ethernet Switch


HP is the first to provide 40Gb downlinks to each blade server, enabling InfiniBand-like performance in an Ethernet blade switch. In another industry first, the low-latency HP SX1018 Ethernet Switch provides the lowest port-to-port latency of any blade switch, more than four times faster than previous switches.


When combined with the space, power and cooling benefits of blade servers, the Mellanox SX1018HP Ethernet Blade Switch provides the perfect network interface for financial applications and high-performance clusters.


Breaking the Cloud “I/O Barrier”

Mellanox and LINBIT just announced a collaboration with Logicworks.

Together, the companies are working to develop a high-performance replication system for Logicworks’ customers. LINBIT’s open-source DRBD technology, combined with the InfiniBand fabric from Mellanox, will lower costs and make it possible to achieve unprecedented levels of input/output (I/O) performance, leading to improved cloud-based storage management and disaster recovery capabilities. The adoption of InfiniBand for the cloud-based system will provide Logicworks’ customers with the unparalleled performance that is critical for hosting latency-sensitive applications.

“By utilizing both LINBIT’s DRBD technology and Mellanox’s InfiniBand interconnects, Logicworks’ customers will be able to take their cloud-based applications to the next level,” said Bart Grantham, R&D vice president, Logicworks. “We are eager to build on our relationship with LINBIT and excited to be among the first in the industry to offer such a solution to our customers.”

Unleashing Performance, Scalability and Productivity with Intel Xeon 5500 Processors “Nehalem”

The industry has been talking about it for a long time, but on March 30th it was officially announced. The new Xeon 5500 “Nehalem” platform from Intel has introduced a totally new concept of server architecture for Intel-based platforms. The memory has moved from being attached to the chipset to being connected directly to the CPU, and memory speed has increased. More importantly, PCI-Express (PCIe) Gen2 can now be fully utilized to unleash new levels of performance and efficiency from Intel-based platforms. PCIe Gen2 is the interface between the CPU and memory on one side and, on the other, the networking that connects servers together to form compute clusters. With PCIe Gen2 now integrated in compute platforms from the majority of OEMs, more data can be sent and received by a single server or blade. This means that applications can exchange data and complete simulations much faster, bringing a competitive advantage to end-users.

To feed PCIe Gen2, the networking needs an equally big pipe, and that is what 40Gb/s InfiniBand brings to the table. It is no surprise that multiple server OEMs, HP and Dell for example, announced the availability of 40Gb/s InfiniBand in conjunction with Intel’s announcement.
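As a rough back-of-the-envelope check of why these two technologies pair so well: both PCIe Gen2 and 40Gb/s (QDR) InfiniBand use 8b/10b line coding, so each works out to roughly 4GB/s of data per direction. The short C snippet below walks through that arithmetic (an illustrative calculation, not vendor-measured figures).

```c
/*
 * Illustrative back-of-the-envelope bandwidth calculation, not measured
 * figures: PCIe Gen2 and 40Gb/s QDR InfiniBand both use 8b/10b encoding,
 * so 80% of the signaling rate is available for data.
 */
#include <stdio.h>

int main(void)
{
    const double encoding = 8.0 / 10.0;                /* 8b/10b line coding */

    /* PCIe Gen2: 5 GT/s per lane, x8 link (a typical adapter slot). */
    double pcie_gen2_x8 = 5.0 * 8 * encoding;          /* Gb/s per direction */

    /* QDR InfiniBand 4x: 10 Gb/s signaling per lane, 4 lanes. */
    double ib_qdr_4x = 10.0 * 4 * encoding;            /* Gb/s per direction */

    printf("PCIe Gen2 x8 : %.0f Gb/s = %.1f GB/s per direction\n",
           pcie_gen2_x8, pcie_gen2_x8 / 8.0);
    printf("IB QDR 4x    : %.0f Gb/s = %.1f GB/s per direction\n",
           ib_qdr_4x, ib_qdr_4x / 8.0);
    return 0;
}
```

The roughly 3.2GB/s uni-directional throughput quoted later in this post is consistent with that theoretical 4GB/s ceiling once protocol and PCIe overheads are taken into account.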


I have been testing several applications to measure the performance benefits of Intel Xeon 5500 processors combined with Mellanox end-to-end 40Gb/s networking solutions. One of those applications was the Weather Research and Forecasting (WRF) model, which is widely used around the world. With Intel Xeon 5500-based servers, Mellanox 40Gb/s ConnectX InfiniBand adapters and the MTS3600 36-port 40Gb/s InfiniBand switch system, we witnessed a 100% increase in performance and productivity over previous Intel platforms.

With a digital media rendering application, Direct Transport Compositor, we have seen a 100% increase in frames-per-second delivery while increasing screen anti-aliasing at the same time. Other applications have shown similar levels of performance and productivity gains as well.


The reasons for the new performance levels are the decrease in latency (to about 1 microsecond) and the huge increase in throughput (more than 3.2GB/s uni-directional and more than 6.5GB/s bi-directional on a single InfiniBand port). With the growing number of CPU cores and the new server architecture, bigger pipes in and out of the servers are required to keep the system balanced and to avoid creating artificial bottlenecks. Another advantage of InfiniBand is its ability to use RDMA to transfer data directly to and from host memory, without involving the CPU in the data transfer. This means one thing only: more CPU cycles can be dedicated to the applications!
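For readers who have not worked with RDMA directly, the minimal C sketch below shows what this looks like at the API level with the standard libibverbs calls: the application registers a buffer once, and from then on the adapter moves data in and out of it without the CPU copying bytes. Connection setup, queue pairs and error handling are omitted; this is an illustrative fragment, not the actual benchmark configuration described above.

```c
/* Minimal libibverbs sketch of RDMA memory registration (illustrative only). */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);        /* protection domain */

    size_t len = 1 << 20;                         /* 1 MiB application buffer */
    void *buf = malloc(len);

    /* Pin and register the buffer. The returned keys (lkey/rkey) allow the
     * adapter to read and write this memory directly via RDMA operations. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    printf("registered %zu bytes, rkey=0x%x\n", len, mr->rkey);

    /* A real application would now create a queue pair, exchange the rkey
     * and buffer address with its peer, and post RDMA read/write requests;
     * the data movement itself is then handled entirely by the adapters. */

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    free(buf);
    return 0;
}
```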


Gilad Shainer

Director, HPC Marketing

Giving Back to Our Community

One of Mellanox’s strongest values as a company is a commitment to give back to the community. Continuing that tradition, we announced today the donation of $160,000, divided among fourteen charities and education programs within Israel and the United States, including: Arazim Elementary School, Baldwin Elementary School, Foothill High School, Harvest Park Middle School, Highlands Elementary School, Leukemia & Lymphoma Society, Oakland’s Children’s Hospital & Research Center, Ort Israel, Simonds Elementary School, Susan G. Komen for the Cure, Twelve Bridges Elementary School, and Williams Wins Foundation.


These organizations, handpicked by our employees, provide an excellent opportunity to create awareness and raise funds for the advancement of invaluable health and education programs for the community. We are proud to be supporters of those efforts, especially during these times.

I/O Agnostic Fabric Consolidation

Today, we announced one of our most innovative and strategic products: BridgeX, an I/O-agnostic fabric consolidation silicon that, dropped into a 1U enclosure, becomes a full-fledged system (the BX4000).

A few years back, we defined our product strategy to deliver single-wire I/O consolidation to data centers. The approach was not to support some arbitrary transports to deliver I/O consolidation, but to use the transports that data centers already rely on for the smooth running of their businesses. ConnectX, an offspring of this strategy, supports InfiniBand, Ethernet and FCoE. ConnectX consolidates the I/O on the adapter, but the data still has to go through different access switches. BridgeX, the second offspring of our product strategy, provides stateless gateway functionality that allows for access-layer consolidation. BridgeX lets data centers remove two fabrics by deploying a single InfiniBand fabric that can present several virtualized GigE, 10GigE and 2, 4 or 8Gb FC interfaces in a single physical server. Together with its software counterpart, BridgeX Manager, which runs alongside on a CPU, BridgeX delivers management functionality for vNICs and vHBAs for both virtualized OSs (VMware, Xen, Hyper-V) and non-virtualized OSs (Linux and Windows).

The virtual I/Os and BridgeX, a stateless gateway implementation, preserve packet and frame integrity. Virtual I/O drivers on the host add InfiniBand headers to the Ethernet or Fibre Channel frames, and the gateway (BridgeX) removes the headers and delivers them on the appropriate LAN or SAN port. Similarly, the gateway adds InfiniBand headers to the packets and frames it receives from the LAN/SAN side and sends them to the host, which removes the encapsulation and delivers the packet or frame to the application. This simple and innovative implementation not only saves deployment costs but also significantly reduces energy and cooling costs.
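To make the data path a little more concrete, the short C sketch below models the encapsulation conceptually. The structure sizes and layouts are illustrative placeholders only, not the real EoIB/FCoIB wire format; the point is simply that the original frame crosses the fabric intact and the gateway needs no per-flow state to restore it.

```c
/*
 * Conceptual model of the encapsulation described above. Header sizes and
 * layouts are illustrative placeholders, not the actual wire format.
 */
#include <stdint.h>

struct ib_transport_hdr {                 /* placeholder for the IB headers   */
    uint8_t bytes[40];
};

struct eth_frame {                        /* the unmodified Ethernet frame    */
    uint16_t len;
    uint8_t  bytes[1514];
};

struct fabric_packet {                    /* what actually crosses the fabric */
    struct ib_transport_hdr ib;
    struct eth_frame        eth;
};

/* Host-side vNIC driver: prepend the InfiniBand headers to the frame. */
void encapsulate(const struct eth_frame *frame,
                 const struct ib_transport_hdr *hdr,
                 struct fabric_packet *out)
{
    out->ib  = *hdr;
    out->eth = *frame;                    /* payload is carried untouched */
}

/* Gateway (BridgeX): strip the InfiniBand headers and hand the original
 * frame to the appropriate LAN port. Nothing about the flow is remembered,
 * which is what makes the gateway stateless. */
void decapsulate(const struct fabric_packet *in, struct eth_frame *out)
{
    *out = in->eth;
}
```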

We have briefed several analysts over the last few weeks, and most of them concurred that the product is innovative and that, in times like these, a BridgeX-based solution can cut costs, speed up deployments and improve performance.

TA Ramanujam (TAR)
tar@mellanox.com