All posts by Cecelia Taylor

About Cecelia Taylor

Cecelia has served as the Sr. Social Media Manager for Mellanox since 2013. She previously worked at Cisco & ZipRealty managing social media marketing, publishing and metrics. Prior to her career in social media, she worked in audience development marketing for B2B publishing. She has a BA from Mills College and resides in the SF East Bay. Follow her on Twitter: @CeceliaTaylor

CloudNFV Proof-of-Concept Approved by ETSI ISG

Mellanox is a CloudNFV integration partner, providing ConnectX-3 and ConnectX-3 PRO 10/40GbE NICs on Dell servers

“The CloudNFV team will be starting PoC execution in mid-January, reporting on our results at the beginning of February, and contributing four major documents to the ISG’s process through the first half of 2014,” said Tom Nolle, President of CIMI Corporation and Chief Architect of CloudNFV, in a recent blog post. Telefonica and Sprint have agreed to sponsor the CloudNFV PoC.

We’re already planning additional PoCs, some focusing on specific areas and developed by our members and some advancing the boundaries of NFV into the public and private cloud and into the world of pan-provider services and global telecommunications.

Mellanox server and storage interconnects enable telecom data-plane virtual network functions with near bare-metal server performance in OpenStack cloud environments, through integration with NFV orchestration and SDN platforms.

Read more: The CloudNFV Proof-of-Concept Was Approved by the ETSI ISG!

Author: As Director of Business Development at Mellanox, Eran Bello handles business, solutions, and product development and strategy for the growing telecom and security markets. Prior to joining Mellanox, Eran was Director of Sales and Business Development at Anobit Technologies, where he was responsible for developing the ecosystem for Anobit’s new enterprise SSD business as well as for portfolio introduction and business engagements with key server OEMs, storage solution providers, and mega datacenters. Earlier, Eran was VP of Marketing and Sales for North and Central America at Runcom Technologies, the first company to deliver an end-to-end Mobile WiMAX/4G solution, and was a member of the WiMAX/4G Forum.

The Train Has Left the Station, Open Ethernet is Happening

Authored by: Amit Katz – Sr. Director, Product Management

Customers are tired of paying huge sums of money for Ethernet switches for no good reason. At one point, OpenFlow seemed like the way to change the networking world, but given various factors such as overlay networks, changing market interests, and other unforeseen developments, it is hard to view OpenFlow today as a game-changer. While it remains a very important technology and provides a valuable means of implementing certain functionalities, it has not created a revolution in the networking industry.


The real revolution occurring today is based on a combination of the momentum gained by the Open Compute Project and the increasing number of switch software and hardware suppliers. Initiatives to open the switch, such as Mellanox’s Open Ethernet, announced earlier this year, have placed us on the right path to bringing networking to where servers are today: affordable, open, and software-defined.


But is this revolution all about saving on cost? Not at all – cost is important, but flexibility, openness, and the freedom to choose are equally important. One of the key elements in enabling flexible vendor selection is the Open Network Install Environment (ONIE), which decouples the switch hardware from its software, enabling vendors to provide something very similar to what we see in the server world: hardware without an operating system. That means a customer can buy a switch with many ports and install their choice of OS on top of it. In the event that the customer wants to change the OS, the lion’s share of the investment (the hardware piece) is protected.
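To make the decoupling concrete, here is a minimal sketch of the idea behind ONIE’s installer discovery. ONIE itself is a small BusyBox/Linux environment driven by shell scripts, so this Python rendering, and the URLs and file names in it, are purely illustrative:

```python
# Illustrative sketch of ONIE-style OS installer discovery. ONIE itself is
# a BusyBox/Linux boot environment, not Python; URLs and names below are
# hypothetical. The point: the switch boots a vendor-neutral environment
# that fetches whichever network OS image the operator chooses.

import urllib.request

# Hypothetical discovery order, loosely modeled on ONIE's waterfall:
# an operator-provided location first, then conventional default names.
CANDIDATE_URLS = [
    "http://deploy.example.com/onie-installer-x86_64",
    "http://deploy.example.com/onie-installer",
]

def fetch_installer(urls):
    """Return the first reachable OS installer image, or None."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                print(f"found installer at {url}")
                return resp.read()
        except OSError:
            continue  # unreachable or missing; try the next candidate
    return None

image = fetch_installer(CANDIDATE_URLS)
# A real ONIE run would verify and execute the fetched installer, which
# lays the chosen network OS down on the switch's flash storage.
```

Because the hardware commits only to the ONIE contract, swapping the OS later is a re-run of this discovery step rather than a forklift replacement.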

Continue reading

Mellanox Congratulates Yarden Gerbi

Mellanox congratulates Yarden Gerbi on winning the gold medal in the recent Israeli judo competition. Gerbi is the 2013 judo world champion in the under-63kg (139 lbs.) category and is ranked first worldwide. Mellanox will sponsor her throughout her training as she attempts to qualify for and compete in the 2016 Olympic Games in Rio de Janeiro, Brazil.


Photo Credit: Oron Kochman

Continue reading

Mellanox FDR 56Gb/s InfiniBand Helps Lead SC’13 Student Cluster Competition Teams to Victory

Mellanox’s end-to-end FDR 56Gb/s InfiniBand solutions helped lead The University of Texas at Austin to victory in the SC Student Cluster Competition’s Standard Track during SC’13. Utilizing Mellanox’s FDR InfiniBand solutions, the team achieved superior application run-time and sustained performance within a 26-amp, 120-volt power limit, allowing them to complete workloads faster while achieving top benchmark performance. Special recognition also went to China’s National University of Defense Technology (NUDT), which, through the use of Mellanox’s FDR 56Gb/s InfiniBand, won the award for highest LINPACK performance.


Held as part of HPC Interconnections, the SC Student Cluster Competition is designed to introduce the next generation of students to the high-performance computing community. In this real-time, non-stop, 48-hour challenge, teams of undergraduate students assembled a small cluster on the SC13 exhibit floor and raced to demonstrate the greatest sustained performance across a series of applications. The winning team was determined based on a combined score for workload completed, benchmark performance, conference attendance, and interviews.

Continue reading

Mellanox at SuperComputing Conference 2013 – Denver, CO


Attending the SC13 conference in Denver next week?


Yes? Be sure to stop by the Mellanox booth (#2722) to check out the latest products, technology demonstrations, and FDR InfiniBand performance with Connect-IB! We have a long list of theater presentations with our partners at the Mellanox booth. We will have giveaways at every presentation, and a lucky attendee will go home with a new Apple iPad Mini at the end of each day!

Don’t forget to sign up for the Mellanox Special Evening Event during SC13 on Wednesday night. Register here: http://www.mellanox.com/sc13/event.php

Location
Sheraton Denver Downtown Hotel
Plaza Ballroom
1550 Court Place
Denver, Colorado 80202
Phone: (303) 893-3333

Time:
Wednesday, November 20th
7:00PM – 10:00PM

Also download the Print ‘n Fly guide to SC13 in Denver from insideHPC!


Finally, come hear from our experts in the following SC13 sessions:


Speaker: Gilad Shainer, VP Marketing; Richard Graham, Sr. Solutions Architect

Title: “OpenSHMEM BoF”

Date: Wednesday, November 20, 2013

Time: 5:30PM – 7:00PM

Room: 201/203


Speaker: Richard Graham, Sr. Solutions Architect

Title: “Technical Paper Session Chair: Inter-Node Communication”

Date: Thursday, November 21, 2013

Time: 10:30AM – 12:00PM

Room: 405/406/407


Speaker: Richard Graham, Sr. Solutions Architect

Title: “MPI Forum BoF”

Date: Thursday, November 21, 2013

Time: 12:15PM-1:15PM

Room: 705/707/709/711

P.S. Stop by the Mellanox booth (#2722) to see our jelly bean jar. Comment on this post with your guess, and you could win a $50 Amazon Gift Card! The winner will be announced at the end of the conference. Follow all of our activities on our social channels, including Twitter, Facebook, and our Community!

Guess How Many?

See you in Denver!

Author: Pak Lui is the Applications Performance Manager at Mellanox Technologies, responsible for application performance management, characterization, profiling, and testing. His main focus is optimizing HPC applications on Mellanox products and exploring new technologies and solutions and their effect on real workloads. Pak has been working in the HPC industry for over 12 years. Prior to joining Mellanox Technologies, Pak worked as a Cluster Engineer, responsible for building and testing HPC cluster configurations from different OEMs and ISVs. Pak holds a B.Sc. in Computer Systems Engineering and an M.Sc. in Computer Science from Boston University in the United States.

Mellanox-Based Clouds: A Key Ingredient for Your Start-up Success

Mellanox’s Ethernet and InfiniBand interconnects enable and enhance world-leading cloud infrastructures around the globe. Utilizing Mellanox’s fast server and storage interconnect solutions, these cloud vendors have maximized their cloud efficiency and reduced their cost-per-application.


Mellanox is now working with a variety of incubators, accelerators, co-working spaces, and venture capitalists to introduce these Mellanox-based cloud vendors to new and evolving startup companies. These companies can enjoy the best performance, with the added benefit of reduced cost, as they advance application development. In this post, we will discuss the advantages of using Mellanox-based clouds.


RDMA (Remote Direct Memory Access) is a critical element in building the most scalable and cost-effective cloud environments and in achieving the highest return on investment. For example, Microsoft Azure’s InfiniBand-based cloud, as listed among the world’s top performing systems (TOP500), demonstrated 33% lower application cost compared to other clouds on the same list.


Mellanox’s InfiniBand and RoCE (RDMA over Converged Ethernet) cloud solutions deliver world-leading interconnect density for compute and storage. Mellanox’s Virtual Protocol Interconnect (VPI) technology incorporates both InfiniBand and Ethernet into the same solution to provide interconnect flexibility for cloud providers; the benefits include:

  • Higher Performance
    • 56Gb/s per port with RDMA
    • 2us for VM to VM connectivity
    • 3.5x faster VM migration
    • 6x faster storage access
  • Cost Effective Storage
    • Higher storage density with RDMA
    • Utilization of existing disk bays
  • Higher Infrastructure Efficiency
    • Support more VMs per server
    • Offload hypervisor CPU
    • Unlimited scalability
    • I/O consolidation (one wire)
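As a concrete illustration of VPI’s flexibility, ConnectX-3 ports can historically be switched between InfiniBand and Ethernet through the mlx4 driver’s sysfs interface. A minimal sketch, assuming a Linux host with the Mellanox driver stack; the PCI address is a placeholder and the exact path varies by driver and OS version:

```python
# Minimal sketch: flip a ConnectX-3 VPI port between InfiniBand and
# Ethernet via the mlx4 driver's sysfs attribute. The PCI address is a
# placeholder; requires root, and paths vary by driver version.

PORT_ATTR = "/sys/bus/pci/devices/0000:04:00.0/mlx4_port1"

with open(PORT_ATTR) as f:
    print("current protocol:", f.read().strip())  # "ib" or "eth"

with open(PORT_ATTR, "w") as f:
    f.write("eth")  # switch port 1 to Ethernet; write "ib" for InfiniBand
```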


Accelerating Cloud Performance


Don’t waste resources worrying about bringing up dedicated cloud infrastructure. Instead, keep your developers focused on developing applications that are strategic to your business. By choosing an RDMA-based cloud from one of our partners, you can rest assured that you will have the most efficient, scalable, and cost-effective cloud platform available.


Learn more: Mellanox Based Clouds

Author: Eli Karpilovski manages the Cloud Market Development at Mellanox Technologies. In addition, Mr. Karpilovski serves as the Cloud Advisory Council Chairman. Mr. Karpilovski served as product manager for the HCA Software division at Mellanox Technologies. Mr. Karpilovski holds a Bachelor of Science in Engineering from the Holon Institute of Technology and a Master of Business Administration from The Open University of Israel.

Mellanox Joins the CloudNFV Initiative

Today, we are pleased to announce that Mellanox has joined the CloudNFV initiative as an Integration Partner to contribute to the success of the effort and its eco-system. CloudNFV is a collaboration between its member companies with a focus on proving the value of NFV. The organization currently includes the following member companies: 6WIND, CIMI, Dell, EnterpriseWeb, Overture, Qosmos, MetaSwitch, and Mellanox.

“I am excited to welcome Mellanox into CloudNFV as an Integration Partner. NFV and the cloud demand an efficient data center network and storage structure, and Mellanox is a global leader in both areas, with high-availability and high-performance fabric connectivity that’s a perfect match for NFV implementations. We’re already working to integrate Mellanox into our lab at Dell’s facilities in California, and they’ll be a key element in our public demonstration of high-performance NFV-based UCC services in December,” said Tom Nolle, President of CIMI Corporation and Chief Architect of CloudNFV.

Network Functions Virtualization (NFV) is an ISG activity within ETSI, dedicated to creating an architecture to host network features and functions on general-purpose servers instead of on purpose-built network appliances or devices. CloudNFV is a platform to test the integration of cloud computing, SDN, and NFV for the carrier telecom cloud. Read more about this initiative here.

“We are excited to join and collaborate with the CloudNFV team and contribute to this important initiative. We are integrating Mellanox’s ConnectX-3 PRO 10/40/56Gbps InfiniBand and Ethernet high-performance, low-latency NIC with the CloudNFV platform to enable data-plane network functions running as virtual machines with near bare-metal performance in an OpenStack environment, through Mellanox Software Defined Networking solutions,” said Eran Bello, Director of Business Development, Mellanox.

To find out more about Mellanox Telecom NFV solutions and CloudNFV (http://www.cloudnfv.com), or to schedule a meeting during the SDN & OpenFlow World Congress in Bad Homburg, Frankfurt, 15-18 October 2013, please contact Eran Bello at eranb@mellanox.com.


Author: As Director of Business Development at Mellanox, Eran Bello handles business, solutions, and product development and strategy for the growing telecom and security markets. Prior to joining Mellanox, Eran was Director of Sales and Business Development at Anobit Technologies, where he was responsible for developing the ecosystem for Anobit’s new enterprise SSD business as well as for portfolio introduction and business engagements with key server OEMs, storage solution providers, and mega datacenters. Earlier, Eran was VP of Marketing and Sales for North and Central America at Runcom Technologies, the first company to deliver an end-to-end Mobile WiMAX/4G solution, and was a member of the WiMAX/4G Forum.

Advantages of RDMA for Big Data Applications

Hadoop MapReduce is the leading Big Data analytics framework. It enables data scientists to process data volumes and varieties never processed before. The results of this data processing are new business creation and operational efficiency.

As MapReduce and Hadoop advance, more organizations are trying to use these frameworks for near real-time workloads. Leveraging RDMA (Remote Direct Memory Access) for faster Hadoop MapReduce has proven to be a successful approach.

In our presentation at Oracle Open World 2013, we show the advantages RDMA brings to enterprises deploying Hadoop and other Big Data applications:

- Doubling analytics performance by accelerating the MapReduce framework

- Doubling Hadoop file system ingress capabilities

- Reducing NoSQL database latencies by 30%

On the analytics side, UDA (Unstructured Data Accelerator) doubles the computation power by offloading networking and buffer copying from the server’s CPU to the network controller. In addition, a novel shuffle-and-merge approach helps achieve the needed performance acceleration. UDA is an open-source package available here (https://code.google.com/p/uda-plugin/). The HDFS (Hadoop Distributed File System) layer is also getting its share of the performance boost.

While the community continues to improve the feature, work conducted at Ohio State University brings RDMA capabilities to the HDFS data ingress process. Initial testing shows over 80% improvement in the data write path to the HDFS repository. The RDMA HDFS acceleration research and downloadable package are available from the Ohio State University website at: http://hadoop-rdma.cse.ohio-state.edu/
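If you want to gauge what an accelerated write path would buy you on your own cluster, a simple before/after baseline of HDFS ingest is a reasonable starting point. A rough sketch using the standard hadoop fs CLI; the paths and file size are illustrative:

```python
# Rough HDFS ingest baseline: time a bulk `hadoop fs -put` and report
# throughput. Run once on a stock setup and once with the accelerated
# write path enabled, then compare. Paths and sizes are illustrative.

import os
import subprocess
import time

LOCAL_FILE = "/tmp/ingest_sample.bin"          # e.g., a ~1 GiB test file
HDFS_DEST = "/benchmarks/ingest_sample.bin"

size_mib = os.path.getsize(LOCAL_FILE) / 2**20

start = time.time()
subprocess.check_call(["hadoop", "fs", "-put", "-f", LOCAL_FILE, HDFS_DEST])
elapsed = time.time() - start

print(f"wrote {size_mib:.0f} MiB in {elapsed:.1f}s "
      f"({size_mib / elapsed:.0f} MiB/s)")
```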

We expect more RDMA acceleration to be enabled for other Big Data frameworks in the future. If you have a good use case, we will be glad to discuss your needs and help with the implementation.

Contact us through the comments section below or at bigdata@mellanox.com.


Author: Eyal Gutkind is a Senior Manager, Enterprise Market Development at Mellanox Technologies, focusing on Web 2.0 and Big Data applications. Eyal has held several engineering and management roles at Mellanox Technologies over the last 11 years. He holds a B.Sc. in Electrical Engineering from Ben Gurion University in Israel and an MBA from the Fuqua School of Business at Duke University, North Carolina.

Accelerating Red Hat’s new OpenStack cloud platform with Mellanox Interconnect

Red Hat Enterprise Linux OpenStack Platform is a new, leading open-source Infrastructure-as-a-Service (IaaS) solution for building and deploying cloud-enabled workloads. This new cloud platform gives customers the agility to scale and quickly meet customer demands without compromising on availability, security, or performance.

Red Hat built an industry-leading certification program for their OpenStack platform. By achieving this technology certification, partners can assure customers that their solutions have been validated with Red Hat OpenStack technology. Anyone who earns this new certification will be able to show that they can accomplish the following tasks:

• Install and configure Red Hat Enterprise Linux OpenStack Platform.

• Manage users, projects, flavors, and rules.

• Configure and manage images.

• Add compute nodes.

• Manage storage using Swift and Cinder.


Mellanox is listed in the Red Hat marketplace as a certified hardware partner for Networking (Neutron) and Block Storage (Cinder) services. This ensures that Mellanox ConnectX-3 hardware has been tested and certified, and is now supported with Red Hat OpenStack technology.

Mellanox Technologies offers seamless integration between its products and Red Hat OpenStack services and provides unique functionality that includes application and storage acceleration, network provisioning, automation, hardware-based security, and isolation. Furthermore, using Mellanox interconnect products allows cloud providers to save significant capital and operational expenses through network and I/O consolidation and by increasing the number of virtual machines (VMs) per server.

With the Mellanox ConnectX-3 card and OpenStack plugins, customers benefit from superior performance and native integration with Neutron:


The Mellanox OpenStack solution extends the Cinder project by adding iSCSI running over RDMA (iSER). Leveraging RDMA, Mellanox OpenStack delivers 5x better data throughput (for example, increasing from 1GB/s to 5GB/s) while requiring up to 80% less CPU utilization.
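To give a feel for what enabling iSER involves on the Cinder side, here is a hedged sketch that points a Havana-era LVM backend at the iSER transport by editing cinder.conf. Driver and option names (LVMISERDriver, iser_ip_address) changed across OpenStack releases, so treat these values as illustrative and verify them against your release’s documentation:

```python
# Illustrative sketch: switch a Cinder LVM backend to the iSER (iSCSI over
# RDMA) transport by editing cinder.conf. Driver and option names vary by
# OpenStack release; the values below reflect the Havana-era tree.

import configparser

CINDER_CONF = "/etc/cinder/cinder.conf"

cfg = configparser.ConfigParser()
cfg.read(CINDER_CONF)

cfg["DEFAULT"]["volume_driver"] = "cinder.volume.drivers.lvm.LVMISERDriver"
cfg["DEFAULT"]["iser_ip_address"] = "192.168.10.5"  # placeholder storage IP

with open(CINDER_CONF, "w") as f:
    cfg.write(f)
# Restart the cinder-volume service for the change to take effect.
```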


Mellanox ConnectX-3 adapters equipped with an onboard embedded switch (eSwitch) are capable of performing layer-2 switching between the different VMs running on the server. Using the eSwitch yields higher performance levels in addition to security and QoS capabilities. The eSwitch configuration is transparent to the Red Hat Enterprise Linux OpenStack Platform administrator through the Mellanox Neutron plugin. By implementing a technology called SR-IOV (Single Root I/O Virtualization) and running RDMA over the eSwitch, we were able to show a dramatic difference (20x) compared to a para-virtualized vNIC running a TCP stream.
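For readers who want to experiment with SR-IOV themselves, virtual functions are typically enabled through the kernel’s generic sysfs attributes. A minimal sketch, assuming a recent Linux kernel, root privileges, and an SR-IOV capable adapter; the interface name is illustrative, and note that some mlx4 driver versions instead take a num_vfs module parameter:

```python
# Minimal sketch: enable SR-IOV virtual functions (VFs) on a NIC via the
# kernel's generic sysfs attributes. The interface name is illustrative,
# root is required, and some mlx4 versions use a num_vfs module parameter.

DEVICE = "/sys/class/net/eth2/device"  # assumption: the ConnectX-3 port

with open(f"{DEVICE}/sriov_totalvfs") as f:
    total = int(f.read())
print(f"adapter supports up to {total} VFs")

with open(f"{DEVICE}/sriov_numvfs", "w") as f:
    f.write(str(min(total, 8)))  # illustrative: create up to 8 VFs
# Each VF appears as its own PCI device that can be passed through to a
# VM, bypassing the hypervisor's para-virtualized datapath entirely.
```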


Learn more:

Mellanox and Red Hat OpenStack joint solution - click here

View the Mellanox certification - click here

Author: Eli Karpilovski manages the Cloud Market Development at Mellanox Technologies. In addition, Mr. Karpilovski serves as the Cloud Advisory Council Chairman. Mr. Karpilovski served as product manager for the HCA Software division at Mellanox Technologies. Mr. Karpilovski holds a Bachelor of Science in Engineering from the Holon Institute of Technology and a Master of Business Administration from The Open University of Israel.

Deploying HPC Clusters with Mellanox InfiniBand Interconnect Solutions

High-performance simulations require the most efficient compute platforms. The execution time of a given simulation depends upon many factors, such as the number of CPU/GPU cores and their utilization factor, and the interconnect’s performance, efficiency, and scalability. Efficient high-performance computing systems require high-bandwidth, low-latency connections between thousands of multi-processor nodes, as well as high-speed storage systems.

Mellanox has released “Deploying HPC Clusters with Mellanox InfiniBand Interconnect Solutions”. This guide describes how to design, build, and test a high-performance compute (HPC) cluster using Mellanox® InfiniBand interconnect, covering the installation and setup of the infrastructure, including:

  • HPC cluster design
  • Installation and configuration of the Mellanox Interconnect components
  • Cluster configuration and performance testing
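Once the fabric is cabled and a subnet manager is running, a point-to-point bandwidth check is a natural first verification step. A minimal sketch using the standard perftest suite’s ib_write_bw; the peer hostname is illustrative, and the output format varies between perftest versions:

```python
# Quick point-to-point InfiniBand check with the perftest suite: start
# `ib_write_bw` with no arguments on the server node, then run this on
# the client node. Hostname is illustrative; output varies by version.

import subprocess

PEER = "node02"  # assumption: server node is already running `ib_write_bw`

result = subprocess.run(
    ["ib_write_bw", PEER],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # reports message sizes, iterations, and BW in MB/sec
```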


Author: Scot Schultz is an HPC technology specialist with broad knowledge of operating systems, high-speed interconnects, and processor technologies. Joining the Mellanox team in March 2013 as Director of HPC and Technical Computing, Schultz is a 25-year veteran of the computing industry. Prior to joining Mellanox, he spent 17 years at AMD in various engineering and leadership roles, most recently in strategic HPC technology ecosystem enablement. Scot was also instrumental in the growth and development of the OpenFabrics Alliance as co-chair of the board of directors. He currently maintains his role as Director of Educational Outreach, founding member of the HPC Advisory Council, and member of various other industry organizations.