Yearly Archives: 2014

Dell Announces 13th Generation PowerEdge Servers

Dell announced the next generation of PowerEdge servers, along with a future vision that includes a continued focus on application performance, new Near Field Communication (NFC) systems-management monitoring for servers, and continued support for software-defined storage. We are pleased to see this new Dell PowerEdge server line and the inclusion of our 10/40GbE NICs.

The Next Generation Family of Dell PowerEdge Servers

As demand for big data analytics grows in the enterprise, organizations need to sort and analyze vast amounts of data to guide business decisions, and large companies running ERP solutions require intensive I/O bandwidth to process many transactions concurrently. Using the latest processors, enhanced in-server flash storage and Mellanox 10/40Gb Ethernet NICs to process more in less time, the Dell family of PowerEdge servers will enable a more seamless ERP experience.

Last but not least, the new in-server storage technology allows customers to accelerate the most important data by offering high performance with NVMe Express Flash storage and deployment of Dell Fluid Cache for SAN. This technology is also ideal for high IOPS requirements in VDI environments with thousands of high performance users, while optimizing your cost per virtual desktop. – Nicolas Cuendent, Dell, September 8, 2014

Available now from Dell, Mellanox’s ConnectX-3 Pro (PEC620) and ConnectX-3 10/40GbE NICs with RDMA over Converged Ethernet (RoCE) and overlay network offloads offer optimized application latency and performance while maintaining extremely low system power consumption.

 


Road to 100Gb/sec…Innovation Required! (Part 3 of 3)

Physical Layer Innovation: Silicon Photonics

In two previous posts, I discussed the innovations required at the transport, network, and link layers of the communications protocol stack to take advantage of 100Gb/s networks. Let's now talk about the physical layer. A 100Gb/sec serial signaling rate implies a 10ps symbol period.

Frankly, this is just not possible on a commercial basis with current technology; it is not practically feasible on either copper or optical interfaces. At this rate, the electrical and optical pulses simply can't travel any useful distance without smearing into each other and becoming corrupted.

So there are two possible solutions. The first is to use four parallel connections, each running at 25Gb/sec. The second is to use a single channel with a 25Gb/sec symbol rate but to carry four bits in each symbol period. Electrically this can be done with multi-level signaling such as Pulse Amplitude Modulation (PAM4, for example, carries two bits per symbol by using four amplitude levels); optically, it can be done by sending four different colors of light on a single fiber using Wavelength Division Multiplexing (WDM) techniques.
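
To make the arithmetic concrete, here is a quick back-of-the-envelope sketch of the numbers above (illustrative Python; the helper name is mine):

```python
# Back-of-the-envelope line-rate arithmetic for the options above.

def symbol_period_ps(symbol_rate_gbaud: float) -> float:
    """Return the symbol period in picoseconds for a rate in Gbaud."""
    return 1000.0 / symbol_rate_gbaud  # period[ps] = 1e12 / (rate * 1e9)

# A single serial 100Gbaud channel (one bit per symbol): 10ps symbols.
print(symbol_period_ps(100))   # -> 10.0

# Option 1: four parallel lanes, each at 25Gbaud, one bit per symbol.
print(4 * 25 * 1)              # -> 100 (Gb/s aggregate)

# Option 2: one 25Gbaud channel carrying four bits per symbol period,
# e.g., four WDM wavelengths sharing a single fiber.
print(1 * 25 * 4)              # -> 100 (Gb/s)

# The relief: at 25Gbaud the symbol period is 40ps, four times longer
# than at 100Gbaud, which is what makes the links practical.
print(symbol_period_ps(25))    # -> 40.0
```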

Recap: VMworld 2014 – San Francisco, CA

It was a busy time last week in San Francisco! During VMworld 2014, we announced a collaboration with VMware and Micron to enable highly efficient deployments of Virtual Desktop Infrastructure (VDI). The deployment combines Mellanox's 10GbE interconnect, VMware's Virtual SAN (VSAN) and Micron's SSDs, creating a scalable infrastructure while minimizing the cost per virtual desktop user. The solution consists of three servers running VMware vSphere and Virtual SAN, each with one Mellanox ConnectX-3 10GbE NIC, two Micron 1.4TB P420m PCIe SSDs and six HDDs.
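
For reference, the per-node configuration can be summarized as a simple structure (an informal sketch of the setup described above, not an official specification):

```python
# Per-node bill of materials for the three-node VDI setup described
# above (informal summary only; not an official Dell/VMware document).
NODE = {
    "software": "VMware vSphere + Virtual SAN (VSAN)",
    "nic":      "1x Mellanox ConnectX-3 10GbE",
    "ssds":     "2x Micron P420m PCIe, 1.4TB each",
    "hdds":     6,
}
CLUSTER = [NODE] * 3  # three identical servers

# Raw PCIe flash across the cluster: 3 nodes x 2 SSDs x 1.4TB each.
print(round(3 * 2 * 1.4, 1), "TB of flash")  # -> 8.4 TB of flash
```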

 



Road to 100Gb/sec…Innovation Required! (Part 2 of 3)

Network and Link Layer Innovation: Lossless Networks

In a previous post, I discussed that innovation is required at every layer of the communications protocol stack to take advantage of 100Gb/s networks, starting with the need for RDMA at the transport layer. Now let's look at the requirements at the next two layers of the stack. It turns out that RDMA transport requires innovation at the Network and Link layers in order to provide a lossless infrastructure.

‘Lossless’ in this context does not mean that the network can never lose a packet, as some level of noise and data corruption is unavoidable. Rather, by ‘lossless’ we mean a network designed to avoid intentional, systematic packet loss as a means of signaling congestion. That is, packet loss is the exception rather than the rule.

Priority Flow Control is similar to a traffic light and enables lossless networks

Lossless networks can be achieved by using priority flow control at the link layer, which allows packets to be forwarded only if there is buffer space available in the receiving device. In this way buffer overflow and packet loss are avoided, and the network becomes lossless.

In the Ethernet world, this is standardized as IEEE 802.1Qbb Priority Flow Control (PFC) and is equivalent to putting stop lights at each intersection: a packet in a given priority class can only be forwarded when the light is green.
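
Here is a minimal sketch of the idea (illustrative Python only; real PFC runs per priority class in switch hardware, with PAUSE frames triggered by buffer thresholds):

```python
# Toy model of link-level flow control: a sender forwards a frame only
# while the receiver has buffer space, so frames wait instead of drop.
from collections import deque

class Receiver:
    def __init__(self, buffer_frames: int):
        self.capacity = buffer_frames
        self.buffer = deque()

    def paused(self) -> bool:
        # Real PFC: the receiver emits a per-priority PAUSE frame when
        # its buffer crosses a threshold. Here, "full" means paused.
        return len(self.buffer) >= self.capacity

    def accept(self, frame) -> None:
        assert not self.paused(), "lossless invariant: never overflow"
        self.buffer.append(frame)

    def drain(self) -> None:
        # The receiver processes a frame, freeing buffer space.
        if self.buffer:
            self.buffer.popleft()

rx = Receiver(buffer_frames=2)
for frame in ["A", "B", "C", "D"]:
    while rx.paused():   # red light: hold the frame at the sender
        rx.drain()       # space frees up as the receiver processes
    rx.accept(frame)     # green light: forward without risk of loss
```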


Road to 100Gb/sec…Innovation Required! (Part 1 of 3)

Transport Layer Innovation: RDMA

During my undergraduate days at UC Berkeley in the 1980s, I remember climbing through the attic of Cory Hall running 10Mbit/sec coaxial cables to professors' offices. Man, that 10BASE2 coax was fast!! Here we are in 2014, right on the verge of 100Gbit/sec networks. A four-orders-of-magnitude increase in bandwidth is no small engineering feat, and achieving 100Gb/s network communications requires innovation at every level of the seven-layer OSI model.

To tell you the truth, I never really understood the top three layers of the OSI model; I prefer the TCP/IP model, which collapses all of them into a single "Application" layer, and that makes more sense to me. Unfortunately, it also collapses the Link layer and the Physical layer into one, and I don't think it makes sense to combine those two. So I like to use my own 'hybrid' model, which collapses the top three layers into an Application layer but keeps the Link and Physical layers separate.
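
For concreteness, here is that hybrid stack written out as a simple structure (my own informal labeling of the layers described above):

```python
# The 'hybrid' model described above: OSI layers 5-7 collapsed into a
# single Application layer, with Link and Physical kept separate.
HYBRID_STACK = [
    "Application",  # OSI session, presentation, and application layers
    "Transport",    # where RDMA comes in (the subject of this post)
    "Network",
    "Link",         # kept separate from Physical, unlike TCP/IP's model
    "Physical",
]
for layer in HYBRID_STACK:
    print(layer)
```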


It turns out that a tremendous amount of innovation is required in these bottom four layers to achieve effective 100Gb/s communications networks. The application layer needs to change as well to take full advantage of 100Gb/s networks, but for now we'll focus on the bottom four layers.

The Benefits of Leaning Into Big Data

Guest post by Alex Henthorn-Iwane, QualiSystems

Big data is for real, but it places heavy demands on IT teams, who have to pull together and provision cloud infrastructure, then deliver big data application deployments with validated performance to meet pressing business-decision timelines. QualiSystems is partnering with Mellanox to simplify big data deployments over any cloud infrastructure, enabling IT teams to meet line-of-business needs while reducing operational costs.



Open MLAG: The Road to the Open Ethernet Switch System

Taking another step toward enabling a world of truly open Ethernet switches, Mellanox recently became the first vendor to release an open-source implementation of Multi-Chassis Link Aggregation Group, more commonly known as MLAG.

Mellanox is involved in and contributes to other open source projects such as OpenStack, ONIE and Puppet, and has already contributed several adapter applications to the open source community. Mellanox is the first and only vendor to open-source its switch SDK API, and it is also a leading member of and contributor to the Open Compute Project, where it provides NICs, switches and software.


Accelerating Genomic Analysis

One of the biggest catchphrases in modern science is the human genome: the DNA code that largely predetermines who we are and many of our medical outcomes. By mapping and analyzing the structure of the human genetic code, scientists and doctors have already started to identify the causes of many diseases and to pinpoint effective treatments based on the specific genetic sequence of a given patient. With the advanced data that such analysis provides, doctors can offer more targeted strategies for potentially terminal patients when no other clinically relevant treatment options exist.
