Yearly Archives: 2014

The Benefits of Leaning Into the Big Data

Guest post by Alex Henthorn-Iwane, QualiSystems

Big data is for real, but it places heavy demands on IT teams, who have to pull together and provision cloud infrastructure, then deliver big data application deployments with validated performance to meet pressing business decision timelines. QualiSystems is partnering with Mellanox to simplify big data deployments over any cloud infrastructure, enabling IT teams to meet line-of-business needs while reducing operational costs.


Continue reading

Open MLAG: The Road to the Open Ethernet Switch System

Taking another step towards enabling a world of truly open Ethernet switches, Mellanox recently became the first vendor to release an open source implementation of Multi Chassis Link Aggregation Group, more commonly known as MLAG.

Mellanox is involved in and contributes to other open source projects, such as OpenStack, ONIE and Puppet, and has already contributed several adapter applications to the open source community. Mellanox is the first and only vendor to open-source its switch SDK API, and it is also a leading member of and contributor to the Open Compute Project, where it provides NICs, switches and software.

Continue reading

Accelerating Genomic Analysis

One of the biggest catchphrases in modern science is the Human Genome: the DNA code that largely pre-determines who we are and many of our medical outcomes. By mapping and analyzing the structure of the human genetic code, scientists and doctors have already started to identify the causes of many diseases and to pinpoint effective treatments based on the specific genetic sequence of a given patient. With the advanced data that such analysis provides, doctors can offer more targeted strategies for potentially terminal patients at times when no other clinically relevant treatment options exist.

Continue reading

Mellanox Collaborates with Dell to Maximize Application Performance in Virtualized Data Centers

Dell Fluid Cache for SAN is enabled by ConnectX®-3 10/40GbE Network Interface Cards (NICs) with Remote Direct Memory Access (RDMA). The Dell Fluid Cache for SAN solution reduces latency and improves I/O performance for applications such as Online Transaction Processing (OLTP) and Virtual Desktop Infrastructure (VDI).

Dell lab tests have revealed that Dell Fluid Cache for SAN can reduce the average response time by 99 percent and achieve four times more transactions per second with a six-fold increase in concurrent users**.

Continue reading

Enabling Application Performance in Data Center Environments

Ethernet switches are simple: they need to move packets from port to port based on the attributes of each packet. There are plenty of switch vendors to choose from, and each aspires to differentiate itself in this saturated market.

 

Mellanox Technologies switches are unique in this market: not just “yet another switch,” but a family of 1RU switches designed around Mellanox’s own switching ASIC. These switches outperform any other switch offered in the market. As the first and (still) only vendor with a complete end-to-end 40GbE solution, Mellanox provides a complete interconnect solution and the best price-performance ratio.

Continue reading

Using Graph Database with High Performance Networks

Companies today are finding that the size and growth of stored data is becoming overwhelming. As databases grow, the challenge is to create value by discovering insights and connections in them in as close to real time as possible. In the recently published whitepaper, “Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks,” we describe a combination of high performance networking with graph database and analytics technologies that offers a solution to this need.

 


 

Each of the examples in the paper is based on an element of a typical analysis solution. The first example, Vertex Ingest Rate, shows the value of using high performance equipment to enhance real-time data availability. Vertex objects represent nodes in a graph, such as customers, so this test is representative of the most basic operation: loading new customer data into the graph. The second example, Vertex Query Rate, highlights the improvement in the time needed to receive results, such as finding a particular customer record or a group of customers.
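To make the two measurements concrete, here is a minimal sketch of what an ingest/query micro-benchmark looks like. The GraphClient class and its add_vertex/find_vertex methods are placeholders standing in for a real graph database driver, not InfiniteGraph’s actual API:

```python
import time

class GraphClient:
    """Placeholder for a graph database client; not InfiniteGraph's real API."""
    def __init__(self):
        self._vertices = {}

    def add_vertex(self, key, properties):
        self._vertices[key] = properties   # stand-in for a remote insert

    def find_vertex(self, key):
        return self._vertices.get(key)     # stand-in for a remote lookup

def measure_rate(operation, count):
    """Return operations per second achieved over `count` calls of `operation`."""
    start = time.perf_counter()
    for i in range(count):
        operation(i)
    return count / (time.perf_counter() - start)

client = GraphClient()
ingest = measure_rate(lambda i: client.add_vertex(i, {"name": f"customer-{i}"}), 100_000)
query = measure_rate(lambda i: client.find_vertex(i), 100_000)
print(f"vertex ingest: {ingest:,.0f} ops/s, vertex query: {query:,.0f} ops/s")
```

In the whitepaper these rates are measured against a distributed deployment, which is where the interconnect bandwidth and latency come into play.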

 

The third example, Distributed Graph Navigation, starts at a vertex and explores its connections to other vertices. This is representative of traversing social networks, finding optimal transportation or communications routes, and similar problems. The final example, Task Ingest Rate, shows the performance improvement when loading the data connecting the vertices. This is similar to entering orders for products, transit times over a communications path, and so on.
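The navigation example is essentially a breadth-first walk outward from a starting vertex. The toy sketch below illustrates that traversal pattern with an in-memory adjacency dictionary rather than a distributed graph store, so it shows the shape of the workload, not the whitepaper’s actual benchmark code:

```python
from collections import deque

def navigate(adjacency, start, max_depth):
    """Breadth-first exploration from `start`, up to `max_depth` hops away."""
    visited = {start}
    frontier = deque([(start, 0)])
    reachable = []
    while frontier:
        vertex, depth = frontier.popleft()
        reachable.append((vertex, depth))
        if depth == max_depth:
            continue
        for neighbor in adjacency.get(vertex, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return reachable

# Toy social graph: who is connected to whom.
graph = {"alice": ["bob", "carol"], "bob": ["dave"], "carol": ["dave"], "dave": []}
print(navigate(graph, "alice", max_depth=2))
```

In a distributed graph, each hop in this loop can mean fetching remote vertices over the network, which is why navigation benefits so directly from a low-latency interconnect.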

 

Each of these elements is an important part of a Big Data analysis solution. Taken together, they show that InfiniteGraph can be made significantly more effective when combined with Mellanox interconnect technology.

 

Resources: Mellanox Web 2.0 Solutions

ISC 2014 Student Cluster Challenge: EPCC Record-Breaking Cluster

The University of Edinburgh’s entry into the ISC 2014 Student Cluster Competition, EPCC, has been awarded first place in the LINPACK test. The EPCC team harnessed Boston’s HPC cluster to smash the 10 Tflop mark for the first time, shattering the previous record of 9.27 Tflops set by students at ASC14 earlier this month. The team recorded a score of 10.14 Tflops at 3.38 Tflops/kW, an efficiency that would rank #4 on the Green500, the list of the most energy-efficient supercomputers in the world.
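For context, those two figures together imply the cluster’s total power draw during the run: 10.14 Tflops divided by 3.38 Tflops/kW works out to roughly 3 kW for the whole system.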

 

Members: Chenhui Quan, Georgios Iniatis, Xu Guo,
Emmanouil Farsarakis, Konstantinos Mouzakitis
Photo Courtesy: HPC Advisory Council

 

This achievement was made possible thanks to the provisioning of a high performance, liquid-cooled GPU cluster by Boston. The system consisted of four 1U Supermicro servers, each comprising two Intel® Xeon™ ‘Ivy Bridge’ processors and two NVIDIA® Tesla K40 GPUs, connected with Mellanox FDR 56Gb/s InfiniBand adapters, switches and cables.

 

Continue reading

Deploying Hadoop on Top of Ceph, Using FDR InfiniBand Network

We recently posted a whitepaper, “Deploying Ceph with High Performance Networks,” which used Ceph as a block storage device. In this post, we review the advantages of using CephFS as an alternative to HDFS.

Hadoop has become a leading programming framework in the big data space. Organizations are replacing several traditional architectures with Hadoop and using it as a storage, database, business intelligence and data warehouse solution. Enabling a single file system for Hadoop and other programming frameworks benefits users who need dynamic scalability of compute and/or storage capabilities.
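On the Hadoop side, the swap largely comes down to a few core-site.xml overrides that point the framework at CephFS instead of an HDFS NameNode. The sketch below generates such a file; the property names and implementation class reflect the cephfs-hadoop bindings as commonly documented and should be treated as assumptions to verify against the plugin version you deploy, and the monitor address is a placeholder:

```python
import xml.etree.ElementTree as ET

# Assumed property names from the cephfs-hadoop bindings; verify against
# your plugin version. "ceph-mon-1" is a placeholder monitor hostname.
properties = {
    "fs.defaultFS": "ceph://ceph-mon-1:6789/",
    "fs.ceph.impl": "org.apache.hadoop.fs.ceph.CephFileSystem",
    "ceph.conf.file": "/etc/ceph/ceph.conf",
}

configuration = ET.Element("configuration")
for name, value in properties.items():
    prop = ET.SubElement(configuration, "property")
    ET.SubElement(prop, "name").text = name
    ET.SubElement(prop, "value").text = value

# Write the Hadoop-style configuration file.
ET.ElementTree(configuration).write("core-site.xml")
```

With a configuration along these lines, Hadoop jobs read and write through CephFS while other frameworks share the same underlying file system.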

Continue reading

How RDMA Increases Virtualization Performance Without Compromising Efficiency

Virtualization has already proven itself to be the best way to improve data center efficiency and to simplify management tasks. However, getting those benefits requires using the various services that the Hypervisor provides, which introduces delay and results in longer execution times compared to running on a non-virtualized data center (native infrastructure). This drawback has not escaped the high-tech R&D community, which has been seeking ways to enjoy the advantages of virtualization with a minimal effect on performance.

One of the most popular solutions today for achieving native performance is the SR-IOV (Single Root IO Virtualization) mechanism, which bypasses the Hypervisor and creates a direct link between the VM and the IO adapter. However, although the VM gets native performance, it loses all of the Hypervisor services, and important features like high availability (HA) and VM migration become hard to implement. SR-IOV also requires that the VM carry the specific NIC driver it communicates with, which complicates management since IT managers can't use the common driver that runs between the VM and the Hypervisor.
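For a concrete sense of the mechanism, SR-IOV virtual functions are typically carved out of a physical NIC through the kernel’s sysfs interface and then passed through to VMs, which is exactly the point at which the Hypervisor’s services are bypassed. A minimal sketch is shown below; the interface name and VF count are placeholders, and this requires root on an SR-IOV-capable adapter:

```python
from pathlib import Path

def enable_vfs(interface: str, num_vfs: int) -> None:
    """Expose `num_vfs` SR-IOV virtual functions on `interface` via sysfs.

    Requires root and an SR-IOV-capable NIC; `interface` (e.g. "eth2")
    is a placeholder for the actual device name.
    """
    device = Path("/sys/class/net") / interface / "device"
    total = int((device / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{interface} supports at most {total} VFs")
    # Reset to 0 first: the VF count can only be changed when no VFs
    # are currently allocated.
    (device / "sriov_numvfs").write_text("0")
    (device / "sriov_numvfs").write_text(str(num_vfs))

if __name__ == "__main__":
    enable_vfs("eth2", 4)
```

Each resulting VF shows up as its own PCI function that a VM can own directly, which is what delivers near-native IO performance at the cost of the Hypervisor’s own services.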

As virtualization becomes a standard technology, the industry continues to find ways to improve performance without losing these benefits, and organizations have started to invest more in the deployment of RDMA-enabled interconnects in virtualized data centers. In one of my previous blog posts, I discussed the proven deployment of RoCE (RDMA over Converged Ethernet) in Azure, using SMB Direct (SMB 3.0 over RDMA) to enable faster access to storage.

Continue reading