Yearly Archives: 2014

CROC’s Public Cloud Goes at InfiniBand Speed

CROC is the number one IT infrastructure creation company in Russia and one of Russia’s top 200 private companies. CROC has become the first public cloud service provider in Russia to adopt InfiniBand, a standard for high-speed data transfer between servers and storage. The migration to the new network infrastructure took approximately one month and resulted in up to a ten-fold increase in cloud service performance.


Continue reading

Establishing a High Performance Cloud with Mellanox CloudX

When it comes to advanced scientific and computational research in Australia, the leading organization is the National Computational Infrastructure (NCI). NCI was tasked with forming a national research cloud as part of a government effort to connect eight geographically distinct Australian universities and research institutions into a single national cloud system.

 

NCI decided to establish a high-performance cloud, based on Mellanox 56Gb/s Ethernet solutions.  NCI, home to the Southern Hemisphere’s most powerful supercomputer, is hosted by the Australian National University and supported by three government agencies: Geoscience Australia, the Bureau of Meteorology, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO).

Continue reading

Looking Forward To SC14: Interconnect Your Future

It’s almost time for SC14 in New Orleans, LA (November 17-20, 2014)! We have many exciting things planned for this annual conference.

 


 

Stop by and visit Mellanox Technologies (booth #2939) to see the latest in our industry-leading FDR 56Gb/s InfiniBand and 40/56GbE solutions. Make sure to meet Mellanox HPC experts in person:

  • Michael Kagan, CTO
  • Dror Goldenberg, VP/Software Architecture
  • Gilad Shainer, VP/Marketing
  • Scot Schultz, Director – HPC & Technical Computing
  • Dr. Richard Graham, Senior Solutions Architect – Software

 

Mellanox Pavilion:

In our theater area, we will host presentations from leading server and storage OEMs, ISVs, end users, and academia. These presenters will provide insight into the benefits and performance improvements achieved with low-latency FDR 56Gb/s InfiniBand I/O technology.

 

Continue reading

End of an Era: OEM Dominance – Could it Be a Thing of the Past?

Guest Blog post by Giacomo Losio, Head of Technology – ProLabs

Original equipment manufacturers (OEMs) have long dominated the optical components market, but a new study now suggests that, as a result of tighter margins and greater competition, customers are putting quality and price before brand. Is the era of the big OEM at an end?

When asked their views of the optical transceiver market at the European Conference on Optical Communications (ECOC) in Cannes, over 120 attendees revealed a trend which indicates a paradigm shift in attitudes.

Why do they buy? What do they buy? What keeps them up at night? The answers may surprise you:

  • 98% of respondents ranked quality as one of their top three priorities when purchasing fibre optics
  • 89% of respondents placed price in the top three list of priorities
  • Yet only 14% of respondents considered brand names to be a top three priority, or even a concern

Continue reading

HP OpenNFV Program for Telco Carriers

Last week in Dusseldorf, Germany, at the SDN & OpenFlow World Congress, HP announced that Mellanox Technologies has joined the HP OpenNFV Program as a technology partner to help carriers take advantage of Network Functions Virtualization (NFV) technology.

 

As the transition to software-defined networking continues, service providers need robust, high-performance NFV solutions that deliver network-level performance, efficiency, scalability, and flexibility. This partnership with HP brings together a world-class SDN platform and Mellanox’s portfolio of innovative 10/40 Gigabit Ethernet solutions to address rapidly evolving customer requirements for next-generation transport networks.

Continue reading

IBM Enterprise2014 Executive Summit: Turbo LAMP

This week at the IBM Enterprise2014 Executive Summit in Las Vegas, IBM unveiled new Power8-based infrastructure for cloud, data, web 2.0, and mobile engagement. Mellanox is being showcased as a key partner enabling critical platforms for IBM’s Big Data analytics, cloud, and software-defined “Elastic Storage” solutions. The new Power8 platform incorporates Mellanox 40Gb Ethernet networking gear and a fully integrated Turbo LAMP (Linux, Apache, MySQL, PHP) software stack.

 


 

This Turbo LAMP stack came about through a development partnership between IBM, Mellanox and several software vendors:

  • Canonical (Ubuntu Linux & Apache Web Server)
  • SkySQL (MariaDB/MySQL Database)
  • Zend (PHP)

The Turbo LAMP integration is important because it is the foundation for the most common e-commerce, content management, and Big Data analytics platforms. This integration allows customers to deliver optimized mobile and web applications while offering the performance, scale, and secure access that businesses need.

On Thursday, October 9, our very own Matthew Sheard will be on stage at the IBM Enterprise2014 conference providing details on the solution, as outlined in this presentation.

Continue reading

Mellanox Software 3D Hackathon

After the overwhelming success of Hackathon 2014 this past January, Mellanox Israel now presents the 3D Hackathon: Develop, Debug, Deploy. This contest is designed to encourage innovation and teamwork while introducing new software technologies and features with a very quick turnaround time.

 

Mellanox Israel employees were invited to submit proposals for new software projects related to existing Mellanox technologies and to form teams of up to three people to develop them. More than 20 unique software proposals were submitted. The steering committee evaluated the submissions and selected 16 proposals for the final competition. All teams were asked to present working demos, and the top three teams were awarded prizes.

Continue reading

Introduction to InfiniBand

InfiniBand is a network communications protocol that offers a switch-based fabric of point-to-point bi-directional serial links between processor nodes, as well as between processor nodes and input/output nodes, such as disks or storage. Every link has exactly one device connected to each end of the link, such that the characteristics controlling the transmission (sending and receiving) at each end are well defined and controlled.

 

InfiniBand creates a private, protected channel directly between the nodes via switches, and facilitates data and message movement without CPU involvement with Remote Direct Memory Access (RDMA) and Send/Receive offloads that are managed and performed by InfiniBand adapters. The adapters are connected on one end to the CPU over a PCI Express interface and to the InfiniBand subnet through InfiniBand network ports on the other. This provides distinct advantages over other network communications protocols, including higher bandwidth, lower latency, and enhanced scalability.

Figure 1: Basic InfiniBand Structure
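
To make the RDMA offload model concrete, here is a minimal sketch using the open-source libibverbs user-space API, one common way applications talk to InfiniBand adapters. It simply enumerates the local RDMA devices and queries the first port of the first adapter; error handling is abbreviated, and the sketch assumes the verbs library and an RDMA-capable adapter are installed.

```c
/* Minimal libibverbs sketch: enumerate RDMA devices and query port 1
 * of the first adapter found. Build with: gcc ib_query.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "No RDMA-capable devices found\n");
        return 1;
    }

    /* Open the first adapter (HCA) attached over PCI Express */
    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    if (!ctx) {
        fprintf(stderr, "Failed to open %s\n", ibv_get_device_name(dev_list[0]));
        ibv_free_device_list(dev_list);
        return 1;
    }

    /* Query port 1: link state, link layer (InfiniBand or Ethernet), MTU */
    struct ibv_port_attr port;
    if (ibv_query_port(ctx, 1, &port) == 0) {
        printf("%s port 1: state=%d link_layer=%d active_mtu=%d\n",
               ibv_get_device_name(dev_list[0]),
               port.state, port.link_layer, port.active_mtu);
    }

    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```

Real applications go further, registering memory regions and creating queue pairs so that data moves directly between application buffers without CPU involvement, but even this small example shows that the fabric is driven from user space through the adapter rather than through the kernel networking stack.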

Continue reading

Dell Announces 13th Generation PowerEdge Servers

Dell announced the next generation of PowerEdge servers along with a future vision that includes a continued focus on application performance, new Near Field Communication (NFC) systems-management monitoring for servers, and continued support for software-defined storage. We are pleased to see this new Dell PowerEdge server line and the inclusion of our 10/40GbE NICs.

PowerEdge 13G Server Family
As big data analytics becomes more in demand in the enterprise, organizations need to be able to sort and analyze vast amounts of data to guide business decisions. Large companies using ERP solutions require intensive I/O bandwidth to process multiple transactions. Using the latest processors, enhanced in-server Flash storage, and Mellanox 10Gb Ethernet NICs to process more in less time, the Dell family of PowerEdge servers will enable a more seamless ERP experience.

Last but not least, the new in-server storage technology allows customers to accelerate the most important data by offering high performance with NVMe Express Flash storage and deployment of Dell Fluid Cache for SAN. This technology is also ideal for high IOPS requirements in VDI environments with thousands of high performance users, while optimizing your cost per virtual desktop. – Nicolas Cuendent, Dell, September 8, 2014

Available now from Dell, Mellanox’s ConnectX-3 Pro (PEC620) and ConnectX-3 10/40GbE NICs with RDMA over Converged Ethernet (RoCE) and overlay network offloads offer optimized application latency and performance while maintaining extremely low system power consumption.


Road to 100Gb/sec…Innovation Required! (Part 3 of 3)

Physical Layer Innovation: Silicon Photonics

So in two previous posts, I discussed the innovations required at the transport, network, and link layers of the communications protocol stack to take advantage of 100Gb/s networks. Let’s now talk about the physical layer. A 100Gb/s signaling rate implies a 10ps symbol period.

Frankly, this is just not possible on a commercial basis with current technology, on either copper or optical interfaces. At this rate the electrical and optical pulses just can’t travel any useful distance without smearing into each other and getting corrupted.
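
As a quick sanity check of that arithmetic, the symbol period is just the reciprocal of the symbol rate:

$$
T_{\text{symbol}} = \frac{1}{R_{\text{symbol}}}, \qquad
\frac{1}{100\ \text{Gbaud}} = 10\ \text{ps}, \qquad
\frac{1}{25\ \text{Gbaud}} = 40\ \text{ps}.
$$

Dropping the per-lane symbol rate to 25Gbaud stretches the symbol period to a far more manageable 40ps, which is why each of the solutions below combines four 25Gb/s streams in one form or another (4 × 25Gb/s = 100Gb/s).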

So there are two possible solutions. The first is to use four parallel connections, each running at 25Gb/s. The second is to use a single channel but pack more bits into each symbol period. This can be done electrically, with multi-level signaling such as Pulse Amplitude Modulation (PAM4), which carries two bits per symbol, or optically, by sending four different colors of light on a single fiber using Wavelength Division Multiplexing (WDM) techniques.

Continue reading