It is that time of the year again, the time to get the drumbeat going for OpenStack Summit, this time in the beautiful city of Vancouver!
Why would you vote for Mellanox proposals? Here are your top three reasons:
Mellanox has been fully devoted to being open: open source, open architecture, open standards, and open APIs are just a few of the ways we show our openness. Mellanox has been involved in and contributed to multiple open source projects, such as OpenStack, ONIE, and Puppet, and has already contributed several adapter applications to the open source community. As a leading member of and contributor to the Open Compute Project, Mellanox has not only delivered the world’s first 40GbE NIC for OCP servers, but has also been a key Ethernet switching partner of white box hotties such as Cumulus Networks.
Mellanox brings efficiency to your OpenStack cloud. Ultimately, cloud is about delivering compute, storage and network resources as a service and utility to end users. Any utility model values efficiency, which helps utility providers support more users, more applications, and more workloads with fewer resources. Mellanox can drive far more bandwidth out of each compute or storage node with our offloading, acceleration, and RDMA features, greatly reducing CPU overhead and leading to better performance and higher efficiency.
Mellanox is a thought leader with innovative ideas to address challenges in various clouds, including public cloud, private cloud, hybrid cloud, High Performance Computing (HPC) cloud and Telco cloud for Network Function Virtualization deployments.
Without further ado, here is our list of proposals for the Telco Strategies track. Please cast your coolest sub-zero votes to help us stand out in this OpenStack Summit!
During the last couple of years, the networking industry has invested a lot of effort into developing Software Defined Network (SDN) technology, which is drastically changing data center architecture and enabling large-scale clouds without significantly escalating the TCO (Total Cost of Ownership).
The secret of SDN is not that it enables control of data center traffic via software (it’s not as if IT managers were using screwdrivers to manage the network before), but rather that it affords the ability to decouple the control path from the data path. This represents a major shift from the traditional data center networking architecture and therefore offers agility and better economics in modern deployments.
For readers who are not familiar with SDN, a simple example can demonstrate the efficiency that SDN provides: Imagine a traffic light that makes its own decisions about when to change and shares data with neighboring lights. Now imagine replacing that with a centralized control system that takes a global view of the entire traffic pattern throughout the city and can therefore make smarter decisions about how to route traffic.
The centralized control unit tells each of the lights what to do (using a standard protocol), reducing the complexity of the local units while increasing overall agility. For example, in an emergency, the system can reroute traffic and allow rescue vehicles faster access to the source of the issue.
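The traffic-light analogy above can be sketched in a few lines of Python. The `Controller` and `Switch` classes here are purely illustrative inventions, not any real SDN API such as OpenFlow; they only show the shape of the control-path/data-path split:

```python
# Hypothetical sketch of the control/data-path split: a central
# controller computes forwarding rules from a global view and pushes
# them to simple switches, which only match and forward.

class Switch:
    """Data path: no local decision logic, just a rule table."""
    def __init__(self, name):
        self.name = name
        self.rules = {}          # dst -> out_port, installed by controller

    def install_rule(self, dst, out_port):
        self.rules[dst] = out_port

    def forward(self, dst):
        # Forwarding is a simple lookup; all intelligence lives upstream.
        return self.rules.get(dst, "drop")

class Controller:
    """Control path: holds the global view and programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def reroute(self, dst, out_port):
        # A global policy change (e.g. clearing a path for a rescue
        # vehicle) is one update pushed to all switches at once.
        for sw in self.switches.values():
            sw.install_rule(dst, out_port)

net = {n: Switch(n) for n in ("s1", "s2", "s3")}
ctl = Controller(net)
ctl.reroute("10.0.0.7", out_port=2)
print(net["s1"].forward("10.0.0.7"))  # 2
```

The switches stay deliberately dumb and cheap; all the smarts sit in one place, which is exactly the economic argument made above.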
Today’s data centers demand that the underlying interconnect provide the utmost bandwidth and extremely low latency. While high bandwidth is important, it is not worth much without low latency. Moving large amounts of data through a network can be achieved with TCP/IP, but only RDMA can produce the low latency that avoids costly transmission delays.
The speedy transfer of data is critical to it being used efficiently. Interconnect based on Remote Direct Memory Access (RDMA) offers the ideal option for boosting data center efficiency, reducing overall complexity, and increasing data delivery performance. Mellanox RDMA enables sub-microsecond latency and up to 56Gb/s bandwidth, translating to screamingly fast application performance, better storage and data center utilization, and simplified network management.
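As a rough illustration of why latency matters alongside bandwidth, the total time to move a block is its latency plus size divided by bandwidth. The RDMA figures below (about 1µs, 56Gb/s) come from the text above; the TCP/IP latency of ~50µs is an illustrative assumption, not a measurement:

```python
# Back-of-envelope: total transfer time = latency + size / bandwidth.
# RDMA numbers (1 us, 56 Gb/s) are from the post; the ~50 us TCP/IP
# latency is an assumed, illustrative figure.

def transfer_us(size_bytes, latency_us, gbps):
    # Gb/s equals bits per microsecond when multiplied by 1e3.
    return latency_us + (size_bytes * 8) / (gbps * 1e3)

msg = 4096  # a typical 4 KB block
tcp = transfer_us(msg, latency_us=50.0, gbps=10)
rdma = transfer_us(msg, latency_us=1.0, gbps=56)
print(f"4 KB over TCP/10GbE : {tcp:6.2f} us")
print(f"4 KB over RDMA/56Gb : {rdma:6.2f} us")
```

For small messages the latency term dominates, which is why raw bandwidth alone cannot close the gap; that is the point of the claim above.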
Mellanox’s Ethernet and InfiniBand interconnects enable and enhance world-leading cloud infrastructures around the globe. Utilizing Mellanox’s fast server and storage interconnect solutions, these cloud vendors maximized their cloud efficiency and reduced their cost-per-application.
Mellanox is now working with a variety of incubators, accelerators, co-working spaces, and venture capitalists to introduce these Mellanox-based cloud providers to new and evolving startup companies. These startups can enjoy the best performance, with the added benefit of reduced cost, as they advance their application development. In this post, we will discuss the advantages of using Mellanox-based clouds.
RDMA (Remote Direct Memory Access) is a critical element in building the most scalable and cost-effective cloud environments and achieving the highest return on investment. For example, Microsoft Azure’s InfiniBand-based cloud, as listed on the TOP500 list of the world’s most powerful systems, demonstrated 33% lower application cost compared to other clouds on the same list.
Mellanox’s InfiniBand and RoCE (RDMA over Converged Ethernet) cloud solutions deliver world-leading Ethernet based interconnect density, compute and storage. Mellanox’s Virtual Protocol Interconnect (VPI) technology incorporates both InfiniBand and Ethernet into the same solution to provide interconnect flexibility for cloud providers.
Performance
- 56Gb/s per port with RDMA
- 2µs VM-to-VM connectivity
- 3.5x faster VM migration
- 6x faster storage access

Cost-Effective Storage
- Higher storage density with RDMA
- Utilization of existing disk bays

Higher Infrastructure Efficiency
- Support for more VMs per server
- Hypervisor CPU offload
- I/O consolidation (one wire)
Don’t waste resources worrying about bringing up dedicated cloud infrastructure. Instead, keep your developers focused on developing applications that are strategic to your business. By choosing an RDMA-based cloud from one of our partners, you can rest assured that you will have the most efficient, scalable, and cost-effective cloud platform available.
Author: Eli Karpilovski manages Cloud Market Development at Mellanox Technologies. In addition, Mr. Karpilovski serves as the Cloud Advisory Council Chairman. Previously, Mr. Karpilovski served as product manager for the HCA Software division at Mellanox Technologies. Mr. Karpilovski holds a Bachelor of Science in Engineering from the Holon Institute of Technology and a Master of Business Administration from The Open University of Israel.
One of the barriers to adoption of blade server technology has been the limited number of blade switches available. Organizations requiring unique switching capabilities or extra bandwidth have had to rely on top-of-rack switches built by networking companies with little or no presence in the server market. The result was a potential customer base of users who wanted to realize the benefits of blade server technology but were forced to remain with rack servers and switches for lack of alternative networking products. Here’s where Hewlett Packard has once again shown why it remains the leader in blade server technology, announcing a new blade switch that leaves the others in the dust.
Mellanox SX1018HP Ethernet Blade Switch
Working closely with our partner Mellanox, HP has just announced a new blade switch for the c-Class enclosure, designed specifically for customers that demand performance and raw bandwidth. The Mellanox SX1018HP is built on the latest SwitchX ASIC technology and, for the first time, gives servers a direct path to 40Gb. In fact, this switch can provide up to sixteen 40Gb server downlinks and up to eighteen 40Gb network uplinks, for an amazing 1.3Tb/s of throughput. Now even the most demanding virtualized server applications can get the bandwidth they need. Financial services customers, especially those involved in High Frequency Trading, look to squeeze every drop of latency out of their networks. Here again the Mellanox SX1018HP excels, dropping port-to-port latency to an industry-leading 230ns at 40Gb. No other blade switch currently available can make that claim.
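The quoted aggregate throughput follows directly from the port counts; a quick check, using the figures above:

```python
# Sanity-check of the quoted switch throughput: 16 server downlinks
# plus 18 network uplinks, each running at 40 Gb/s.
downlinks, uplinks, port_gbps = 16, 18, 40
total_tbps = (downlinks + uplinks) * port_gbps / 1000
print(f"{total_tbps:.2f} Tb/s")  # 1.36 Tb/s, i.e. the ~1.3 Tb/s quoted
```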
For customers currently running InfiniBand networks, the appeal of collapsing their data requirements onto a single network has always been tempered by the lack of support for Remote Direct Memory Access (RDMA) on Ethernet networks. Again, HP and Mellanox lead the way in blade switches. The SX1018HP supports RDMA over Converged Ethernet (RoCE), allowing RDMA-tuned applications to work across both InfiniBand and Ethernet networks. When coupled with the recently announced HP544M 40Gb Ethernet/FDR InfiniBand adapter, customers can now support RDMA end to end on either network and begin the migration to a single Ethernet infrastructure. Finally, many customers already familiar with Mellanox InfiniBand switches provision and manage their networks with Unified Fabric Manager (UFM). The SX1018HP can be managed and provisioned with this same tool, providing a seamless transition to the Ethernet world. Of course, standard CLI and secure web browser management are also available.
Incorporating this switch along with the latest generation of HP blade servers and network adapters now gives any customer the same speed, performance and scalability that was previously limited to rack deployments using a hodgepodge of suppliers. Data center operations that cater to High Performance Cluster Computing (HPCC), Telecom, Cloud Hosting Services and Financial Services will find the HP blade server/Mellanox SX1018HP blade switch a compelling and unbeatable solution.
Click here for more information on the new Mellanox SX1018HP Ethernet Blade Switch.
As flash storage has become increasingly available at lower and lower prices, many organizations are leveraging flash’s low-latency features to boost application and storage performance in their data centers. Flash storage vendors claim their products can increase application performance by leaps and bounds, and a great many data center administrators have found that to be true. But what if your flash could do even more?
One of the main features of flash storage is its ability to drive massive amounts of data onto the network with very low latency. Data can be written to and retrieved from flash storage in a matter of microseconds, at speeds exceeding several gigabytes per second, allowing applications to get the data they need and store their results in record time.

Now, suppose you connect that ultra-fast storage to your compute infrastructure using 1GbE technology. A single 1GbE port can transfer data at around 120MB/s. For a flash-based system driving, say, 8GB/s of data, you’d need sixty-seven 1GbE ports to avoid bottlenecking your system. Most systems have only eight ports available, so using 1GbE would limit your lightning-fast flash to just under 1GB/s, an eighth of the performance you could be getting. That’s a bit like buying a Ferrari F12berlinetta (top speed: over 211 mph) and committing to drive it only on residential streets (speed limit: 25 mph). Sure, you’d look cool, but racing neighborhood kids on bicycles isn’t really the point of a Ferrari, is it?

Upgrade that 1GbE connection to 10GbE, and you can cover your full flash bandwidth with seven ports, if your CPU can handle the increased TCP stack overhead and still perform application tasks. In terms of our vehicular analogy, you’re driving the Ferrari on the highway now, but you’re still stuck in third gear. So, how do you get that Ferrari to the Bonneville Salt Flats and really let loose?
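The port-count math above is easy to verify, using the same figures (about 120MB/s usable per 1GbE port, roughly ten times that per 10GbE port):

```python
import math

# Ports needed to carry a flash array's full throughput, using the
# post's figures: ~120 MB/s per 1GbE port, ~1200 MB/s per 10GbE port.
def ports_needed(flash_mb_s, per_port_mb_s):
    return math.ceil(flash_mb_s / per_port_mb_s)

flash = 8000  # an 8 GB/s flash system, expressed in MB/s
print(ports_needed(flash, 120))    # 67 x 1GbE ports
print(ports_needed(flash, 1200))   # 7 x 10GbE ports

# With only eight 1GbE ports per server, you top out at 8 * 120 =
# 960 MB/s, i.e. roughly an eighth of what the flash can deliver.
```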
Take one step further in your interconnect deployment and upgrade that 10GbE connection to 40GbE with RDMA over Converged Ethernet (RoCE), or to a 56Gb/s FDR InfiniBand connection. Two ports of either protocol will give you full-bandwidth access to your flash system, and RDMA means ultra-low CPU overhead and increased overall efficiency. Your flash system will perform to its fullest potential, and your application performance will improve drastically. Think land-speed records, except in a data center.
So, if your flash-enhanced application performance isn’t quite what you expected, perhaps it’s your interconnect and not your flash system that’s underperforming.
Written By: Erin Filliater, Enterprise Market Development Manager
We all know that we live in a world of data, data and more data. In fact, IDC predicts that in 2015, the amount of data created and replicated will reach nearly 8 Zettabytes. With all of this data stored in external storage systems, the way data is transferred from storage to a server or application becomes critical to effectively utilizing that information. Couple this with today’s shrinking IT budgets and “do more with less” mindsets, and you have a real challenge on your hands. So, what’s a data center storage administrator to do?
Remote Direct Memory Access (RDMA) based interconnects offer an ideal option for boosting data center efficiency, reducing overall complexity, and increasing data delivery performance. Available over InfiniBand and, via RDMA over Converged Ethernet (RoCE), over standard Ethernet, RDMA allows data to be transferred from storage to server without traversing the CPU and main memory path that TCP/IP requires. Greater CPU and overall system efficiency is attained because the storage and server compute power is used for just that, computing, instead of processing network traffic. Bandwidth and latency are also compelling: both InfiniBand and RoCE feature microsecond transfer latencies and bandwidths up to 56Gb/s, and both can be used to consolidate data center interconnects. This translates to screamingly fast application performance, better storage and data center utilization, and simplified network management.
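To put the CPU overhead argument in rough numbers: a commonly quoted, and admittedly crude, rule of thumb is that TCP/IP protocol processing costs about 1 Hz of CPU per bit per second moved. Treating that rule and the 3GHz core speed below as assumptions:

```python
# Rough estimate of CPU consumed by TCP/IP protocol processing, using
# the oft-quoted (and approximate) "1 Hz per bit/s" rule of thumb.
# The 3 GHz core speed is likewise an assumption for illustration.
def tcp_cores(gbps, core_ghz=3.0, hz_per_bps=1.0):
    return (gbps * 1e9 * hz_per_bps) / (core_ghz * 1e9)

for rate in (10, 40, 56):
    print(f"{rate:2d} Gb/s of TCP traffic ~ {tcp_cores(rate):4.1f} cores")
```

Under these assumptions, driving 56Gb/s through a host TCP stack could occupy the better part of twenty cores; RDMA moves that work onto the adapter, leaving those cores for the application, which is exactly the efficiency claim made above.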
On a performance basis, RDMA based interconnects are actually more economical than other alternatives, both in initial cost and in operational expenses. Additionally, because RDMA interconnects are available with such high bandwidths, fewer cards and switch ports are needed to achieve the same storage throughput. This enables savings in server PCIe slots and data center floor space, as well as overall power consumption. It’s an actual solution for the “do more with less” mantra.
So, the next time your application performance isn’t making the grade, rather than simply adding more CPUs, storage and resources, maybe it’s time to consider a more efficient data transfer path.