Monthly Archives: December 2013

Mellanox Video Recap of Supercomputing Conference 2013 – Denver

We want to thank everyone for joining us at SC13 in Denver, Colorado, last month. We hope you had a chance to become more familiar with our end-to-end interconnect solutions for HPC.

 

Check out the videos of the presentations given during the Mellanox Evening Event, held on November 20, 2013, at the Sheraton Denver Downtown Hotel. The event was keynoted by Eyal Waldman, President and CEO of Mellanox Technologies:

Continue reading

Symantec’s Clustered File Storage over InfiniBand

Last week (on December 9th, 2013), Symantec announced the general availability (GA) of its clustered file storage (CFS). The new solution enables customers to access mission-critical data and applications 400% faster than traditional Storage Area Networks (SANs), at 60% of the cost.

Faster and cheaper? It sounds like magic. How are they doing it?

To understand the “magic,” it helps to look at the advantages that SSDs combined with a high-performance interconnect bring to modern scale-out (or clustered) storage systems. Up to now, SAN-based storage has typically been used to increase performance and provide data availability for multiple applications and clustered systems. However, with the growing demand from high-performance applications, SAN vendors are adding SSDs into the storage array itself to deliver higher bandwidth and lower-latency response.

Since SSDs offer extremely high IOPS and bandwidth, it is important to use the right interconnect technology and to avoid bottlenecks on the path to storage. Older fabrics such as Fibre Channel (FC) cannot keep up, as 8Gb/s (or even 16Gb/s) of bandwidth is not enough to satisfy these applications’ requirements. While 40Gb/s Ethernet may look like an alternative, InfiniBand (IB) currently supports up to 56Gb/s, with a roadmap to 100Gb/s next year.
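
To put those numbers in perspective, here is a rough back-of-the-envelope sketch of how many SSDs each fabric can feed before the link itself becomes the bottleneck. The link speeds are the ones discussed above; the per-SSD throughput and overhead allowance are assumptions for illustration only.

```python
# Rough comparison of how many SSDs saturate each fabric.
# Link speeds are from the discussion above; the per-SSD figure is assumed.

FABRIC_GBPS = {
    "Fibre Channel 8G": 8,
    "Fibre Channel 16G": 16,
    "40GbE": 40,
    "FDR InfiniBand 56G": 56,
}

SSD_GBPS = 4  # assumed ~0.5 GB/s (4 Gb/s) sustained per SSD

for fabric, link_gbps in FABRIC_GBPS.items():
    usable_gbps = link_gbps * 0.9        # rough allowance for protocol overhead
    ssds = usable_gbps / SSD_GBPS
    print(f"{fabric:>20}: ~{ssds:.1f} SSDs before the link saturates")
```

Even a handful of SSDs is enough to saturate an 8Gb/s or 16Gb/s FC link, which is exactly why the interconnect, not the media, becomes the limiting factor.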

Continue reading

CloudNFV Proof-of-Concept Approved by ETSI ISG

Mellanox is a CloudNFV integration partner, providing ConnectX-3 and ConnectX-3 PRO 10/40GbE NICs on Dell servers.

“The CloudNFV team will be starting PoC execution in mid-January, reporting on our results at the beginning of February, and contributing four major documents to the ISG’s process through the first half of 2014,” said Tom Nolle, President of CIMI Corporation and Chief Architect of CloudNFV, in a recent blog post. Telefonica and Sprint have agreed to sponsor the CloudNFV PoC.

We’re already planning additional PoCs, some focusing on specific areas and developed by our members and some advancing the boundaries of NFV into the public and private cloud and into the world of pan-provider services and global telecommunications.

Mellanox server and storage interconnects enable telecom data-plane virtual network functions with near bare-metal server performance in OpenStack cloud environments, through integration with NFV orchestration and SDN platforms.
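
As a concrete illustration of what “near bare metal” means in practice, the sketch below shows one common building block: exposing SR-IOV virtual functions on a NIC port through the standard Linux sysfs interface so they can be handed directly to VNF guests. This is a minimal sketch, not Mellanox’s integration code; the interface name and VF count are assumptions.

```python
# Minimal sketch: enable SR-IOV virtual functions (VFs) on a NIC port via the
# standard Linux sysfs interface. The VFs can then be passed through to VNF
# guests on an OpenStack compute node. Interface name and VF count are assumed.

import os

IFACE = "eth2"      # hypothetical ConnectX-3 10/40GbE port name
NUM_VFS = 8         # hypothetical number of virtual functions to expose

def enable_sriov(iface: str, num_vfs: int) -> None:
    """Write the requested VF count to the device's sriov_numvfs node (needs root)."""
    path = f"/sys/class/net/{iface}/device/sriov_numvfs"
    if not os.path.exists(path):
        raise RuntimeError(f"{iface} does not expose SR-IOV at {path}")
    # The kernel requires resetting to 0 before setting a different non-zero count.
    with open(path, "w") as f:
        f.write("0")
    with open(path, "w") as f:
        f.write(str(num_vfs))

if __name__ == "__main__":
    enable_sriov(IFACE, NUM_VFS)
    print(f"Enabled {NUM_VFS} VFs on {IFACE}; they can now be assigned to guests.")
```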

Read more:   The CloudNFV Proof-of-Concept Was Approved by the ETSI ISG!

Author: As a Director of Business Development at Mellanox, Eran Bello handles business, solutions, and product development and strategy for the growing telecom and security markets. Prior to joining Mellanox, Eran was Director of Sales and Business Development at Anobit Technologies, where he was responsible for developing the ecosystem for Anobit’s new enterprise SSD business as well as portfolio introduction and business engagements with key server OEMs, storage solution providers, and mega datacenters. Earlier, Eran was VP of Marketing and Sales for North and Central America at Runcom Technologies, the first company to deliver an end-to-end Mobile WiMAX/4G solution, and was a member of the WiMAX/4G Forum.

The Train Has Left the Station, Open Ethernet is Happening

Authored by:   Amit Katz – Sr. Director, Product Management

Customers are tired of paying huge sums of money for Ethernet switches for no good reason. At some point, OpenFlow seemed like the way to change the networking world, but due to various factors, such as overlay networks, changing market interests, and other unforeseen developments, it is hard to view OpenFlow today as a game-changer. While it remains a very important technology and provides a valuable means of implementing certain functionality, it has not created a revolution in the networking industry.

 

The real revolution occurring today is based on a combination of the momentum gained by the Open Compute Project and the increasing number of switch software and hardware suppliers. Initiatives to open the switch, such as Mellanox’s Open Ethernet, announced earlier this year, have placed us on the right path to bringing networking to where servers are today: affordable, open, and software-defined.

 

But is this revolution all about saving on cost? Not at all – cost is important, but flexibility, openness, and the freedom to choose are equally important. One of the key elements in enabling vendor selection elasticity is the Open Network Install Environment (ONIE), which decouples the switch hardware from its software, enabling vendors to provide something very similar to what we see in the server world: hardware without an operating system. That means customers can buy a switch with many ports and install their choice of OS on top of it. If the customer later wants to change the OS, the lion’s share of the investment (the hardware) is protected.
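
To make the decoupling concrete, the sketch below models, in a simplified and purely illustrative way (not ONIE’s actual implementation), how an ONIE-style install environment might locate an OS installer image on the network. The server address, platform string, and image names are all assumptions.

```python
# Simplified illustration of ONIE-style installer discovery: try progressively
# more generic image names on an HTTP server until one is found. All names,
# addresses, and the platform string here are hypothetical.

import urllib.request
from typing import Optional

PLATFORM = "x86_64-example_switch-r0"      # assumed platform identifier
HTTP_SERVER = "http://10.0.0.1/onie"       # hypothetical image server

CANDIDATES = [
    f"onie-installer-{PLATFORM}",          # most specific image first
    "onie-installer-x86_64",
    "onie-installer",                      # most generic fallback
]

def find_installer() -> Optional[str]:
    """Return the URL of the first installer image the server actually serves."""
    for name in CANDIDATES:
        url = f"{HTTP_SERVER}/{name}"
        try:
            with urllib.request.urlopen(url, timeout=5):
                return url                 # a successful response means the image exists
        except OSError:
            continue                       # 404s and timeouts fall through to the next name
    return None

if __name__ == "__main__":
    image = find_installer()
    print("Would install:", image or "no image found; staying in the install environment")
```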

Continue reading

Mellanox Congratulates Yarden Gerbi

Mellanox congratulates Yarden Gerbi on winning the gold medal in the recent Israeli Judo competition. Gerbi is the 2013 Judo World Champion in the under-63kg (139 lbs.) category and is ranked first worldwide. Mellanox will sponsor her throughout her training as she attempts to qualify for and compete in the 2016 Olympic Games in Rio de Janeiro, Brazil.

 

Photo Credit: Oron Kochman

Continue reading

Mellanox FDR 56Gb/s InfiniBand Helps Lead SC’13 Student Cluster Competition Teams to Victory

Mellanox’s end-to-end FDR 56Gb/s InfiniBand solutions helped lead The University of Texas at Austin to victory in the SC Student Cluster Competition’s Standard Track during SC’13. Utilizing Mellanox’s FDR InfiniBand solutions, The University of Texas at Austin achieved superior application run-time and sustained performance within a 26-amp, 120-volt power limit, allowing the team to complete workloads faster while achieving top benchmark performance. Special recognition also went to China’s National University of Defense Technology (NUDT), which, through the use of Mellanox’s FDR 56Gb/s InfiniBand, won the award for highest LINPACK performance.
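
For context, the power envelope mentioned above works out as follows; the node count is a hypothetical example, not the winning team’s actual configuration.

```python
# Quick arithmetic on the Student Cluster Competition power limit: 26 A at 120 V.

AMPS, VOLTS = 26, 120
budget_watts = AMPS * VOLTS              # 3120 W total for the whole cluster

nodes = 8                                # assumed cluster size, for illustration only
print(f"Total power budget: {budget_watts} W")
print(f"Per-node budget with {nodes} nodes: {budget_watts / nodes:.0f} W")
```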

 

Held as part of HPC Interconnections, the SC Student Cluster Competition is designed to introduce the next generation of students to the high-performance computing community. In this real-time, non-stop, 48-hour challenge, teams of undergraduate students assembled a small cluster on the SC13 exhibit floor and raced to demonstrate the greatest sustained performance across a series of applications. The winning team was determined based on a combined score for workload completed, benchmark performance, conference attendance, and interviews.

Continue reading