
Are You Limiting Your Flash Performance?


Written By: Erin Filliater, Enterprise Market Development Manager

Via the Storage Solutions Group

As flash storage has become increasingly available at lower and lower prices, many organizations are leveraging flash’s low-latency features to boost application and storage performance in their data centers.  Flash storage vendors claim their products can increase application performance by leaps and bounds, and a great many data center administrators have found that to be true.  But what if your flash could do even more?


One of the main features of flash storage is its ability to drive massive amounts of data to the network with very low latencies. Data can be written to and retrieved from flash storage in a matter of microseconds at speeds exceeding several gigabytes per second, allowing applications to get the data they need and store their results in record time.

Now, suppose you connect that ultra-fast storage to your compute infrastructure using 1GbE technology. A single 1GbE port can transfer data at around 120MB/s. For a flash-based system driving, say, 8GB/s of data, you’d need sixty-seven 1GbE ports to avoid bottlenecking your system. Most systems have only eight ports available, so using 1GbE would limit your lightning-fast flash to just under 1GB/s, an eighth of the performance you could be getting. That’s a bit like buying a Ferrari F12berlinetta (max speed: >211 mph) and committing to drive it only on residential streets (speed limit: 25 mph). Sure, you’d look cool, but racing neighborhood kids on bicycles isn’t really the point of a Ferrari, is it?

Upgrade that 1GbE connection to 10GbE, and you can cover your full flash bandwidth with seven ports, provided your CPU can handle the increased TCP stack overhead and still perform application tasks. In terms of our vehicular analogy, you’re driving the Ferrari on the highway now, but you’re still stuck in third gear. So, how do you get that Ferrari to the Bonneville Salt Flats and really let loose?


Take one step further in your interconnect deployment and upgrade that 10GbE connection to a 40GbE with RDMA-over-Converged-Ethernet (RoCE) or 56Gb/s FDR InfiniBand connection. Two ports of either protocol will give you full bandwidth access to your flash system, and RDMA features mean ultra-low CPU overhead and increased overall efficiency.  Your flash system will perform to its fullest potential, and your application performance will improve drastically.  Think land-speed records, except in a data center.
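For readers who want to check the arithmetic, here is a minimal Python sketch of the port math behind these claims. The per-port figures (about 120MB/s usable for 1GbE, and roughly 1.2GB/s, 4.7GB/s and 6.6GB/s usable for 10GbE, 40GbE and FDR 56Gb/s InfiniBand) are rough assumptions for illustration, not formal specifications.

    # Rough port-count math for an 8GB/s flash system (illustrative figures only).
    import math

    FLASH_THROUGHPUT_GB_S = 8.0  # flash system throughput in GB/s

    # Approximate usable throughput per port, in GB/s (assumed, not spec values).
    port_speeds = {
        "1GbE": 0.12,
        "10GbE": 1.2,
        "40GbE (RoCE)": 4.7,
        "FDR 56Gb/s InfiniBand": 6.6,
    }

    for name, per_port in port_speeds.items():
        ports = math.ceil(FLASH_THROUGHPUT_GB_S / per_port)
        print(f"{name}: {ports} port(s) to carry {FLASH_THROUGHPUT_GB_S:g} GB/s")

Running it reproduces the numbers above: sixty-seven 1GbE ports, seven 10GbE ports, or two ports of either 40GbE RoCE or FDR InfiniBand.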


[Figure: Flash and RDMA diagram]


So, if your flash-enhanced application performance isn’t quite what you expected, perhaps it’s your interconnect and not your flash system that’s underperforming.


Find out more about RoCE and InfiniBand technologies and how they can enhance your storage performance: http://www.mellanox.com/page/storage and http://www.mellanox.com/blog/2013/01/rdma-interconnects-for-storage-fast-efficient-data-delivery/

Why Atlantic.Net Chose Mellanox

Atlantic.Net is a global cloud hosting provider. With Mellanox interconnect technology, Atlantic.Net can now offer customers more robust cloud hosting services through a reliable, adaptable infrastructure, all at a lower cost compared to traditional interconnect solutions.

Why Atlantic.Net Chose Mellanox

  • Price and Cost Advantage

Expensive hardware, overhead costs while scaling, and administrative costs can be avoided with Mellanox’s interconnect technologies, reducing costs by 32% per application.

  • Lower Latency and Faster Storage Access

Utilizing the iSCSI Extensions for RDMA (iSER) protocol, implemented in KVM servers over a single converged InfiniBand adapter, delivers lower latency with less complexity, resulting in lower costs to the user.

  • Consolidate I/O Transparently

LAN and SAN connectivity for VMs on KVM is tightly integrated with Atlantic.Net’s management environment, allowing Atlantic.Net to transparently consolidate LAN, SAN, live-migration and other traffic.

The Bottom Line

By deploying Mellanox’s InfiniBand solution, Atlantic.Net can support high-volume, high-performance requirements on demand and offer a service that scales as customers’ needs change and grow. Having built a high-performance, reliable and redundant storage infrastructure using off-the-shelf commodity hardware, Atlantic.Net was able to avoid purchasing expensive Fibre Channel storage arrays, saving significant capital expense per storage system.


http://youtu.be/frTWWwjacyc

The Promise of an End-to-End SDN Solution: Can It Be Done?

Written By: Eli Karpilovski, Manager, Cloud Market Development


With OpenStack, the new open-source cloud orchestration platform, the promise of flexible network virtualization and network overlays is looking closer than ever. The vision of this platform is to enable the on-demand creation of many distinct networks on top of one underlying physical infrastructure in the cloud environment. The platform will support automated provisioning and management of large groups of virtual machines or compute resources, including extensive monitoring in the cloud.


There is still a lot of work to be done, as there are many concerns around the efficiency and simplicity of the management solution for compute and storage resources. A mature solution will need to incorporate different approaches to intra-server provisioning, QoS and vNIC management: for example, relying on local network adapters that can manage requests using the OpenFlow protocol, or taking a more standard approach in which the switch does the managing. Using only one method might create performance and efficiency penalties.


Learn how Mellanox’s OpenStack solution offloads the orchestration platform from the management of individual networking elements, with the end goal of simplifying operations of large-scale, complex infrastructures: www.mellanox.com/openstack


Have questions? Join our Cloud Community today!

Why I left HP after 19 years to join ProfitBricks

On 02.12.13, in Cloud Computing, by Pete Johnson, new Platform Evangelist

Woz once said, “I thought I’d be an HPer for life.” While I don’t usually claim to have a whole lot in common with the man who designed the first computer I ever saw (an Apple II, summer ’78), in this instance it’s true. As it turns out, we were both wrong.

[Photo: Pete Johnson, new Platform Evangelist for ProfitBricks]

I stayed at HP as long as I did for lots of reasons. Business model diversity is one:  over the last two decades, I was lucky enough to be a front line coder, a tech lead, a project manager, and an enterprise architect while working on web sites for enterprise support, consumer ecommerce sales, enterprise online sales, all forms of marketing, and even post-sales printing press supplies reordering.   Most recently I was employee #37 for HP’s new public cloud offering where I performed a lot of roles including project management of web development teams, customer facing demonstrations at trade shows, and sales pitches for Fortune 500 CIOs.  But I also remained at HP because of the culture and values that came straight from Bill Hewlett and Dave Packard, which my early mentors instilled in me. You can still find those values there today if you look hard enough, and if anybody gets that, Meg Whitman does.

Why leave HP for ProfitBricks then?

So if I still have such a rosy view of HP, despite recent bumpiness, why did I leave to become the Platform Evangelist for ProfitBricks?

Three reasons:

  1. InfiniBand
  2. InfiniBand
  3. InfiniBand

If you are anything like the sample of computer industry veterans I told about my move last week, you just said, “What the heck is InfiniBand?” Let me explain what it is and why it is poised to fundamentally change cloud computing.

Ethernet is the dominant network technology used in data centers today. Originally created during the Carter administration, it uses a hierarchical structure of LAN segments, which ultimately means that packets have exactly one path to traverse when moving from point A to point B anywhere in the network. InfiniBand, a popular 21st-century technology in the supercomputing and high-performance computing (HPC) communities, uses a grid or mesh system that gives packets multiple paths from point A to point B. This key difference, among other nuances, helps give InfiniBand a top speed of 80 Gbits/sec, 80x faster than Amazon AWS’s standard 1Gbit/sec Ethernet connections.

What’s the big deal about InfiniBand?

“So what?” you may be thinking. “A faster cloud network is nice, but it doesn’t seem like THAT big a deal.”

Actually, it is a VERY big deal when you stop and think about how a cloud computing provider can take advantage of a network like this.

As founder and CMO Andreas Gauger put it to me during the interview process, virtualization is a game of Tetris in which you are trying to fit virtual machines of various sizes onto physical hardware to maximize utilization. This is particularly critical for a public cloud provider. With InfiniBand, ProfitBricks can rearrange the pieces, and at 80 Gbits/sec, our hypervisor can move a VM from one physical machine to another without the VM ever knowing. This helps us maximize the physical hardware and keep prices competitive, but it also means two other things for our customers:

  • You can provision any combination of CPU cores and RAM you want, up to and including the size of the full physical hardware we use
  • You can change the number of CPU cores or amount of RAM on-the-fly, live, without rebooting the VM

In a world where other public cloud providers force you into cookie cutter VM sizes in an attempt to simplify the game of Tetris for themselves, the first feature is obviously differentiating. But when most people hear the second one, their reaction is that it can’t possibly be true — it must be a lie. You can’t change virtual hardware on a VM without rebooting it, can you?

No way you can change CPU or RAM without rebooting a VM!

Do you suppose I’d check that out before leaving the only employer I’ve ever known in my adult life?

I spun up a VM, installed Apache, launched a load test from my desktop against the web server I just created, changed both the CPU Cores and RAM on the server instance, confirmed the change at the VM command line, and allowed the load test to end.  You know what the load test log showed?

Number of errors: 0.

The Apache web server never went down, despite the virtual hardware change, and handled HTTP requests every 40 milliseconds. I never even lost my remote login session. Whoa.
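ProfitBricks’ management stack is its own, so I won’t pretend to show it here, but the underlying capability exists in stock KVM/libvirt. The following is a minimal sketch, assuming a libvirt host, a hypothetical domain named “web01”, and a guest defined with enough maximum vCPU and memory headroom; it illustrates the idea of a live resize, not our actual implementation.

    # Minimal sketch: live-resizing a KVM guest's vCPUs and memory via libvirt.
    # Assumes the Python libvirt bindings, a running domain named "web01"
    # (hypothetical), and that the domain was defined with enough headroom.
    import libvirt

    conn = libvirt.open("qemu:///system")      # connect to the local hypervisor
    dom = conn.lookupByName("web01")           # hypothetical domain name

    # Hot-add vCPUs: apply to the live guest only, not the persistent config.
    dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # Adjust memory while the guest runs (value in KiB): grow it to 8 GiB.
    dom.setMemoryFlags(8 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    print(dom.info())                          # [state, maxMem, memory, nrVirtCpu, cpuTime]
    conn.close()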

But wait, there’s more (and more to come)

Throw in the fact that the ProfitBricks block storage platform takes advantage of InfiniBand to provide not just RAID 10 redundancy, but RAID 10 mirrored across two availability zones, and I was completely sold. I realized that ProfitBricks founder, CTO, and CEO Achim Weiss took the data center efficiency knowledge that gave 1&1 a tremendous price advantage and combined it with supercomputing technology to create a cloud computing game-changer that his engineering team is just beginning to tap into. I can’t wait to see what they do with object storage, databases, and everything else that you’d expect from a full IaaS offering. I had to be a part of that.

Simply put: ProfitBricks uses InfiniBand to enable Cloud Computing 2.0.

And that’s why, after 19 years, I left HP.

Product Flash – Bridgeworks Potomac 40Gb iSCSI-to-SAS Bridge

Written By: Erin Filliater, Enterprise Market Development Manager


The amount of worldwide digital information is growing on a daily basis, and all of that data has to be stored somewhere, usually in external storage infrastructures, systems and devices.  Of course, in order for that information to be useful, you need to have fast access to it when your application calls for it.  Enter Bridgeworks’ newest bridging product, the Potomac ESAS402800 40Gb iSCSI-to-SAS protocol bridge.  The first to take advantage of 40Gb/s data center infrastructures, the ESAS402800 integrates Mellanox 40Gb iSCSI technology to provide the fastest iSCSI SAN connectivity to external SAS devices such as disk arrays, LTO6 tape drives and tape libraries, allowing data center administrators to integrate the newest storage technologies into their environments without disrupting their legacy systems.

In addition to flat-out speed, plug-and-play connectivity and web-based GUI management make the ESAS402800 easy to install and operate. Adaptive read- and write-forward caching techniques allow the ESAS402800 bridge to share storage effectively in today’s highly virtualized environments.


All of this adds up to easier infrastructure upgrades, more effective storage system migration and realization of the full performance potential of new SAS-connected storage systems. Pretty impressive for a single device.


Find out more about the recent Potomac ESAS402800 40Gb iSCSI-to-SAS bridge launch at Bridgeworks’ website:

http://www.4bridgeworks.com/news_and_press_releases/press_releases.phtml?id=252&item=26

RDMA – Cloud Providers’ “Secret Sauce”

Written By: Eli Karpilovski, Manager, Cloud Market Development


With expansive growth expected in the cloud-computing market (some researchers expect it to grow from $70.1 billion in 2012 to $158.8 billion in 2014), cloud service providers must find ways to deliver increasingly sustainable performance. At the same time, they must accommodate an increasing number of Internet users, whose expectations of improved and consistent response times keep growing.


However, service providers cannot increase performance if the corresponding cost also rises. What these providers need is a way to deliver low latency, fast response, and increasing performance while minimizing the cost of the network.


RDMA is a good example of how to accomplish that. Traditionally, centralized storage was either slow or created bottlenecks, which de-emphasized the need for fast storage networks. With the advent of fast solid-state devices, we are seeing the need for a very fast, converged network to leverage the capabilities being offered. In particular, we are starting to see cloud architectures use RDMA-based storage appliances to accelerate storage access, reduce latency and achieve the best CPU utilization at the end points.


To learn more about the use of RDMA in providing cloud infrastructure that meets performance, availability and agility needs, now and in the future, check the following link.


Mellanox – InfiniBand makes headway in the cloud – YouTube

RDMA Interconnects for Storage: Fast, Efficient Data Delivery

Written By: Erin Filliater, Enterprise Market Development Manager

We all know that we live in a world of data, data and more data. In fact, IDC predicts that in 2015, the amount of data created and replicated will reach nearly 8 Zettabytes. With all of this data stored in external storage systems, the way data is transferred from storage to a server or application becomes critical to effectively utilizing that information. Couple this with today’s shrinking IT budgets and “do more with less” mindsets, and you have a real challenge on your hands. So, what’s a data center storage administrator to do?

Remote Direct Memory Access (RDMA) based interconnects offer an ideal option for boosting data center efficiency, reducing overall complexity and increasing data delivery performance. Available over InfiniBand and Ethernet, with RDMA over Converged Ethernet (RoCE), RDMA allows data to be transferred from storage to server without passing the data through the CPU and main memory path of TCP/IP Ethernet. Greater CPU and overall system efficiencies are attained because the storage and servers’ compute power is used for just that—computing—instead of processing network traffic. Bandwidth and latency are also of interest: both InfiniBand and RoCE feature microsecond transfer latencies, and bandwidths up to 56Gb/s. Plus, both can be effectively used for data center interconnect consolidation. This translates to screamingly fast application performance, better storage and data center utilization and simplified network management.

On a performance basis, RDMA based interconnects are actually more economical than other alternatives, both in initial cost and in operational expenses. Additionally, because RDMA interconnects are available with such high bandwidths, fewer cards and switch ports are needed to achieve the same storage throughput. This enables savings in server PCIe slots and data center floor space, as well as overall power consumption. It’s an actual solution for the “do more with less” mantra.

So, the next time your application performance isn’t making the grade, rather than simply adding more CPUs, storage and resources, maybe it’s time to consider a more efficient data transfer path.

Find out more: http://www.mellanox.com/page/storage

Mellanox Named HP’s 2012 Innovation Supplier of the Year

We’re thrilled to start out 2013 with some great news: Mellanox was named HP’s 2012 Innovation Supplier of the Year at last month’s annual HP Supplier Summit in San Francisco.

Mellanox was chosen as the top supplier out of 600 worldwide contractors across all HP product lines. To determine the winner of the Innovation Supplier of the Year Award, HP evaluated an elite group of suppliers with outstanding performance exemplifying principles of delivering greater value, including enhanced revenue, cost savings and process efficiencies.

Earlier this year, Mellanox announced that its Ethernet and InfiniBand interconnect solutions are now available through HP to deliver leading application performance for the HP ProLiant Generation 8 (Gen8) servers. Specific products available include: Mellanox ConnectX®-3 PCIe 3.0 FDR 56Gb/s InfiniBand adapters and 10/40GbE NICs, and SwitchX® FDR 56Gb/s InfiniBand switch blades and systems. Mellanox offers the only interconnect option for the HP ProLiant Gen8 servers that includes PCIe 3.0-compliant adapters.

We look forward to the continued partnership with HP in 2013. And stay tuned to our blog to learn more about new and innovative partnerships between Mellanox and its customers throughout the year.

Microsoft WHQL Certified Mellanox Windows OpenFabrics Drivers

Last week, Mellanox released the latest Microsoft WHQL certified Mellanox WinOF 2.0 (Windows OpenFabrics) drivers. This provides superior performance for low-latency, high-throughput clusters running on Microsoft Windows® HPC Server 2008.

You may be asking yourself, how does this address my cluster computing needs? Does the Windows OFED stack released by Mellanox provide the same performance seen on the Linux OFED stack release?

Well, the Windows networking stack is optimized to address the needs of various HPC vertical segments. In our benchmark tests with MPI applications that require low latency and high performance, latency is in the low single-microsecond range, with uni-directional bandwidth of 3GByte/sec using the Microsoft MS-MPI protocol.
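For anyone who wants to run a latency measurement of this kind themselves, the sketch below is a generic MPI ping-pong test written with mpi4py. It illustrates the methodology only; it is not the MS-MPI benchmark behind the numbers above, and the message size and iteration count are arbitrary.

    # Generic MPI ping-pong latency sketch (illustrative, not the benchmark above).
    # Run with two ranks, e.g.: mpiexec -n 2 python pingpong.py
    from mpi4py import MPI
    import time

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    ITERS = 10000
    msg = bytearray(8)       # small 8-byte payload, typical for latency tests
    buf = bytearray(8)

    comm.Barrier()
    start = time.perf_counter()
    for _ in range(ITERS):
        if rank == 0:
            comm.Send(msg, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(msg, dest=0, tag=0)
    elapsed = time.perf_counter() - start

    if rank == 0:
        # Half the average round-trip time is the usual one-way latency figure.
        print(f"average one-way latency: {elapsed / ITERS / 2 * 1e6:.2f} us")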

Mellanox’s 40Gb/s InfiniBand adapters (ConnectX) and switches (InfiniScale IV), with their proven performance, efficiency and scalability, allow data centers to scale up to tens of thousands of nodes with no drop in performance. Our drivers and Upper Layer Protocols (ULPs) allow end users to take advantage of the RDMA networking available in Windows® HPC Server 2008.

Here is the link to show the compute efficiency of Mellanox InfiniBand compute nodes compared to Gigabit Ethernet (GigE) compute nodes performing mathematical simulations on Windows® HPC Server 2008.

As the saying goes, “the proof is in the pudding.” Mellanox InfiniBand interconnect adapters and technology are the best option for all enterprise data center and high-performance computing needs.

Satish Kikkeri
satish@mellanox.com

QuickTransit Performance Results

As previously suggested, in this post I will review a different kind of application, one focused on platform conversion. QuickTransit, developed by a company called Transitive (recently acquired by IBM), is a cross-platform virtualization technology that allows applications compiled for one operating system and processor to run on servers that use a different processor and operating system, without requiring any source code or binary changes.

We used QuickTransit for Solaris/SPARC-to-Linux/x86-64 and tested for latency with a basic test modeled on the way the financial industry operates, which depends heavily on server-to-server interconnect performance.

The topology we used was two servers (the first acting as the server and the second as the client). We measured latency at different object sizes and rates over the following interconnects: GigE, Mellanox ConnectX VPI 10GigE, and Mellanox ConnectX VPI 40Gb/s InfiniBand. For anyone who has not read the first posts, I would like to reiterate that we are committed to our “out-of-the-box” guideline, meaning that neither the application nor any of the drivers are changed after being downloaded from the web.

With InfiniBand we used three different Upper-Layer Protocols (ULPs), none of which requires code intervention: IPoIB connected mode (CM), IPoIB datagram mode (UD), and the Sockets Direct Protocol (SDP). The results were stunning, mainly because we had assumed that with all the layers of software, in addition to the software that converts SPARC Solaris code to x86 Linux code, the interconnect would have little impact, if any.
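Because IPoIB and SDP are transparent to socket applications, a latency run of this kind can be driven by a plain TCP echo loop. The sketch below is a generic illustration only (the address, object size and iteration count are hypothetical placeholders, and it is not the actual harness used for these tests); the same script runs unchanged over GigE, IPoIB or, with the usual preload library, SDP.

    # Generic TCP round-trip latency sketch (illustrative, not the real harness).
    # Start one side with "server" as the argument, the other with no argument.
    import socket, sys, time

    HOST, PORT = "192.168.1.10", 5001   # hypothetical server address and port
    OBJ_SIZE, ITERS = 1024, 10000       # object size in bytes, round trips

    def run_server():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                while data := conn.recv(OBJ_SIZE):
                    conn.sendall(data)          # echo each object straight back

    def run_client():
        payload = b"x" * OBJ_SIZE
        with socket.create_connection((HOST, PORT)) as sock:
            start = time.perf_counter()
            for _ in range(ITERS):
                sock.sendall(payload)
                received = 0
                while received < OBJ_SIZE:      # read until the full echo arrives
                    received += len(sock.recv(OBJ_SIZE - received))
            elapsed = time.perf_counter() - start
        print(f"average round-trip latency: {elapsed / ITERS * 1e6:.1f} us")

    if __name__ == "__main__":
        run_server() if sys.argv[1:] == ["server"] else run_client()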

We learned that 40Gb/s InfiniBand performance is significantly better than GigE for a wide range of packet sizes and transmission rates. We saw latency improvements of more than 2x when using InfiniBand, and 30% faster execution when using 10GigE. Go and beat that…


Let’s look at the results in a couple of different ways. In particular, let’s look at the size of the messages being sent: the advantage above relates to small message sizes (see graph #2), while at larger message sizes the advantage, already striking, becomes enormous.

In my next blog I plan to show more results that are closely related to the financial markets. If anyone out there identifies an application they would like our dedicated team to benchmark, please step forward and send me an e-mail.

Nimrod Gindi
nimrodg@mellanox.com