All posts by admin

Product Flash: NetApp EF540 Enterprise Flash Array

 

Written By: Erin Filliater, Enterprise Market Development Manager

Via the Storage Solutions Group

 

Everyone knows that flash storage is a big deal. However, one of the gaps in the flash storage market has been in enterprise flash systems. Flash caching has been present in many enterprise storage environments for some time, but enterprise all-flash arrays haven't. This week, that all changed with the launch of NetApp's EF540 Flash Array. Targeted at business-critical applications, the EF540 offers the enterprise features we're used to in a NetApp system: high availability, reliability, manageability, snapshots, synchronous and asynchronous replication, backup and a fully redundant architecture. Add to that some impressive performance statistics—over 300,000 IOPS, sub-millisecond latency and 6GB/s throughput—and you have a system to be reckoned with.


NetApp® EF540 Flash Array

 

What does all this mean for the IT administrator?  Database application performance boosts of up to 500% over traditional storage infrastructures mean faster business operation results, decreased time-to-market and increased revenue.  Enterprise RAS (reliability, availability, serviceability) features lead to less downtime, intuitive management and greater system ROI.

 

Of course, as mentioned earlier in the week in the Are You Limiting Your Flash Performance? post, the network that flash systems are connected to also plays a role in boosting performance and reliability.  To this end, NetApp has equipped the EF540 well, with 40Gb/s QDR InfiniBand, 10Gb/s iSCSI and 8Gb/s Fibre Channel connectivity options, all with automated I/O path failover for robustness.

 

Following the flash trend, NetApp also announced the all-new FlashRay family of purpose-built enterprise flash arrays, with expected availability in early 2014.  The FlashRay products will focus on efficient, flexible, scale-out architectures to maximize the value of flash deployments across the entire enterprise data center stack.  Given all this and the enterprise features of the EF540, there's no longer a reason not to jump on the flash bandwagon and start moving your enterprise ahead of the game.

 

Find out more about the EF540 Flash Array and FlashRay product family at NetApp’s website: http://www.netapp.com/us/products/storage-systems/flash-ef540/ and http://www.netapp.com/us/company/news/press-releases/news-rel-20130219-678946.aspx

 

Find out more about how Mellanox accelerates NetApp storage solutions at: https://solutionconnection.netapp.com/mellanox-connectx-3-virtual-protocol-interconnect-vpi-adapter-cards.aspx

HP updates server, storage and networking line-ups

 

HP updated its enterprise hardware portfolio, with the most notable additions being networking devices that combine wired and wireless infrastructure to better manage bring-your-own-device policies. One of the highlights is the Mellanox SX1018 HP Ethernet switch, which lowers port latency and improves downlinks.

 

The Mellanox SX1018HP Ethernet Switch is the highest-performing Ethernet fabric solution in a blade switch form factor. It delivers up to 1.36Tb/s of non-blocking throughput, perfect for High-Performance Computing, High-Frequency Trading and Enterprise Data Center applications.

 

Utilizing the latest Mellanox SwitchX ASIC technology, the SX1018HP is an ultra-low-latency switch that is ideally suited as an access switch, providing InfiniBand-like performance with sixteen 10Gb/40Gb server-side downlinks and eighteen 40Gb QSFP+ uplinks to the core, with port-to-port latency as low as 230ns.

 

The Mellanox SX1018HP Ethernet Switch has a rich set of Layer 2 networking and security features and supports faster application performance and enhanced server CPU utilization with RDMA over Converged Ethernet (RoCE), making this switch the perfect solution for any high performance Ethernet network.

 

Mellanox SX1018HP Ethernet Switch

 

HP is the first to provide 40Gb downlinks to each blade server, enabling InfiniBand-like performance in an Ethernet blade switch. In another industry first, the low-latency HP SX1018 Ethernet Switch provides the lowest port-to-port latency of any blade switch, more than four times faster than previous switches.

 

When combined with the space, power and cooling benefits of blade servers, the Mellanox SX1018HP Ethernet Blade Switch provides the perfect network interface for Financial applications and high performance clusters.

 

How Windows Azure Achieved 90.2 Percent Efficiency

Written By: Eli Karpilovski, Manager, Cloud Market Development

 

Windows Azure, one of the largest public cloud providers in the world today, recently ran a system performance benchmark, called LINPACK, to demonstrate the performance capabilities of its ‘Big Compute’ hardware. Windows Azure submitted the results and was certified as one of the world’s largest supercomputers on the TOP500.

 

The results were impressive: 151.3 TFlops on 8,065 cores with 90.2 percent efficiency, 33% higher than other major 10GbE cloud providers that ran the same benchmark!
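As a sanity check on that number, LINPACK efficiency is just measured performance (Rmax) divided by theoretical peak (Rpeak). The per-core figures below are assumed, illustrative values, not Azure-published specs:

```python
# Back-of-the-envelope LINPACK efficiency: Rmax / Rpeak.
# Per-core peak assumes ~2.6 GHz cores doing 8 flops/cycle (illustrative).
cores = 8065
flops_per_cycle = 8            # assumed double-precision rate per core
clock_ghz = 2.6                # assumed clock speed

rpeak_tflops = cores * flops_per_cycle * clock_ghz / 1000.0
rmax_tflops = 151.3            # Azure's measured LINPACK result

efficiency = rmax_tflops / rpeak_tflops
print(f"Rpeak ≈ {rpeak_tflops:.1f} TFlops, efficiency ≈ {efficiency:.1%}")
```

With those assumed per-core numbers, Rpeak comes out near 168 TFlops, and 151.3/168 lands right around the quoted 90.2 percent.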

 

What is their secret? 40Gb/s InfiniBand network with RDMA – the Mellanox way.

 

Learn more about it: http://blogs.msdn.com/b/windowsazure/archive/2012/11/13/windows-azure-benchmarks-show-top-performance-for-big-compute.aspx

 

Join the Mellanox Cloud Community: http://community.mellanox.com/groups/cloud

Are You Limiting Your Flash Performance?

 

Written By: Erin Filliater, Enterprise Market Development Manager

Via the Storage Solutions Group

As flash storage has become increasingly available at lower and lower prices, many organizations are leveraging flash’s low-latency features to boost application and storage performance in their data centers.  Flash storage vendors claim their products can increase application performance by leaps and bounds, and a great many data center administrators have found that to be true.  But what if your flash could do even more?

 

One of the main features of flash storage is its ability to drive massive amounts of data to the network with very low latencies.  Data can be written to and retrieved from flash storage in a matter of microseconds at speeds exceeding several gigabytes per second, allowing applications to get the data they need and store their results in record time.  Now, suppose you connect that ultra-fast storage to your compute infrastructure using 1GbE technology.  A single 1GbE port can transfer data at around 120MB/s.  For a flash-based system driving, say, 8GB/s of data, you'd need sixty-seven 1GbE ports to avoid bottlenecking your system.  Most systems have only eight ports available, so using 1GbE would limit your lightning-fast flash to just under 1GB/s, an eighth of the performance you could be getting.

That's a bit like buying a Ferrari F12berlinetta (top speed: over 211 mph) and committing to drive it only on residential streets (speed limit: 25 mph).  Sure, you'd look cool, but racing neighborhood kids on bicycles isn't really the point of a Ferrari, is it?  Upgrade that 1GbE connection to 10GbE, and you can cover your full flash bandwidth with seven ports, if your CPU can handle the increased TCP stack overhead and still perform application tasks.  In terms of our vehicular analogy, you're driving the Ferrari on the highway now, but you're still stuck in third gear.  So, how do you get that Ferrari to the Bonneville Salt Flats and really let loose?
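The port arithmetic above can be sketched in a few lines (the usable per-port rates are rough nominal figures, not measured numbers):

```python
# How many NIC ports does it take to keep up with a flash array?
# Per-port rates are approximate usable GB/s, before protocol overhead quirks.
import math

flash_gbps = 8.0                            # flash system throughput, GB/s
port_rates = {"1GbE": 0.12, "10GbE": 1.2}   # approx. usable GB/s per port

for name, rate in port_rates.items():
    ports = math.ceil(flash_gbps / rate)
    print(f"{name}: {ports} ports needed for {flash_gbps} GB/s")
```

That works out to 67 ports of 1GbE or 7 ports of 10GbE for the 8GB/s system in the example, matching the figures above.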

 

Take one step further in your interconnect deployment and upgrade that 10GbE connection to 40GbE with RDMA over Converged Ethernet (RoCE) or to 56Gb/s FDR InfiniBand. Two ports of either protocol will give you full-bandwidth access to your flash system, and RDMA features mean ultra-low CPU overhead and increased overall efficiency.  Your flash system will perform to its fullest potential, and your application performance will improve drastically.  Think land-speed records, except in a data center.

 

Flash and RDMA diagram

 

So, if your flash-enhanced application performance isn’t quite what you expected, perhaps it’s your interconnect and not your flash system that’s underperforming.

 

Find out more about RoCE and InfiniBand technologies and how they can enhance your storage performance: http://www.mellanox.com/page/storage and http://www.mellanox.com/blog/2013/01/rdma-interconnects-for-storage-fast-efficient-data-delivery/

Why Atlantic.Net Chose Mellanox

Atlantic.Net is a global cloud hosting provider. With Mellanox interconnect solutions, Atlantic.Net can now offer customers more robust cloud hosting services through a reliable, adaptable infrastructure, all at a lower cost compared to traditional interconnect solutions.

Why Atlantic.Net Chose Mellanox

  • Price and Cost Advantage

Expensive hardware, overhead costs while scaling and administrative costs can be avoided with Mellanox's interconnect technologies, reducing costs by 32% per application.

  • Lower Latency and Faster Storage Access

The iSCSI Extensions for RDMA (iSER) protocol, implemented in KVM servers over a single converged InfiniBand adapter, delivers lower latency and is less complex, resulting in lower costs to the user.

  • Consolidate I/O Transparently

LAN and SAN connectivity for VMs on KVM is tightly integrated with Atlantic.Net's management environment, allowing Atlantic.Net to transparently consolidate LAN, SAN, live-migration and other traffic.

The Bottom Line

By deploying Mellanox's InfiniBand solution, Atlantic.Net can support high-volume and high-performance requirements on demand and offer a service that scales as customers' needs change and grow. Having built a high-performance, reliable and redundant storage infrastructure using off-the-shelf commodity hardware, Atlantic.Net was able to avoid purchasing expensive Fibre Channel storage arrays, saving significant capital expenses per storage system.

 

http://youtu.be/frTWWwjacyc

The Promise of an End-to-End SDN Solution: Can It Be Done?

Written By: Eli Karpilovski, Manager, Cloud Market Development

 

With OpenStack, the new open source cloud orchestration platform, the promise of flexible network virtualization and network overlays is looking closer than ever. The vision of this platform is to enable the on-demand creation of many distinct networks on top of one underlying physical infrastructure in the cloud environment. The platform will support automated provisioning and management of large groups of virtual machines or compute resources, including extensive monitoring in the cloud.

 

There is still a lot of work to be done, as there are many concerns around the efficiency and simplicity of the management solution for compute and storage resources. A mature solution will need to incorporate different approaches to intra-server provisioning, QoS and vNIC management: for example, leaning on local network adapters that are capable of managing requests using the OpenFlow protocol, or using a more standard approach in which management is handled by the switch. Using only one method might create performance and efficiency penalties.

 

Learn how Mellanox's OpenStack solution offloads the orchestration platform from the management of individual networking elements, with the end goal of simplifying operations of large-scale, complex infrastructures: www.mellanox.com/openstack

 

Have questions? Join our Cloud Community today!

Why I left HP after 19 years to join ProfitBricks

On 02.12.13, in Cloud Computing, by Pete Johnson, new Platform Evangelist

Woz once said, “I thought I’d be an HPer for life.” While I don’t usually claim to have a whole lot in common with the man who designed the first computer I ever saw (an Apple II, summer ’78), in this instance it’s true. As it turns out, we were both wrong.

Pete Johnson, new Platform Evangelist for ProfitBricks

I stayed at HP as long as I did for lots of reasons. Business model diversity is one:  over the last two decades, I was lucky enough to be a front line coder, a tech lead, a project manager, and an enterprise architect while working on web sites for enterprise support, consumer ecommerce sales, enterprise online sales, all forms of marketing, and even post-sales printing press supplies reordering.   Most recently I was employee #37 for HP’s new public cloud offering where I performed a lot of roles including project management of web development teams, customer facing demonstrations at trade shows, and sales pitches for Fortune 500 CIOs.  But I also remained at HP because of the culture and values that came straight from Bill Hewlett and Dave Packard, which my early mentors instilled in me. You can still find those values there today if you look hard enough, and if anybody gets that, Meg Whitman does.

Why leave HP for ProfitBricks then?

So if I still have such a rosy view of HP, despite recent bumpiness, why did I leave to become the Platform Evangelist for ProfitBricks?

Three reasons:

  1. InfiniBand
  2. InfiniBand
  3. InfiniBand

If you are anything like the sample of computer industry veterans I told about my move last week, you just said, “What the heck is InfiniBand?” Let me explain what it is and why it is poised to fundamentally change cloud computing.

Ethernet is the dominant network technology used in data centers today. Originally created during the Carter administration, it uses a hierarchical structure of LAN segments, which ultimately means that packets have exactly one path to traverse when moving from point A to point B anywhere in the network. InfiniBand, a popular 21st-century technology in the supercomputing and high-performance computing (HPC) communities, uses a grid or mesh system that gives packets multiple paths from point A to point B. This key difference, among other nuances, gives InfiniBand a top speed of 80 Gbit/s, 80x faster than Amazon AWS's standard 1 Gbit/s Ethernet connections.

What’s the big deal about InfiniBand?

“So what?” you may be thinking. “A faster cloud network is nice, but it doesn’t seem like THAT big a deal.”

Actually, it is a VERY big deal when you stop and think about how a cloud computing provider can take advantage of a network like this.

As founder and CMO Andreas Gauger put it to me during the interview process, virtualization is a game of Tetris in which you are trying to fit various sizes of virtual machines on top of physical hardware to maximize utilization. This is particularly critical for a public cloud provider. With InfiniBand, ProfitBricks can rearrange the pieces, and at 80 Gbit/s, our hypervisor can move a VM from one physical machine to another without the VM ever knowing. This helps us maximize the physical hardware and keep prices competitive, but it also means two other things for our customers:

  • You can provision any combination of CPU cores and RAM you want, up to and including the size of the full physical hardware we use
  • You can change the number of CPU cores or amount of RAM on-the-fly, live, without rebooting the VM

In a world where other public cloud providers force you into cookie cutter VM sizes in an attempt to simplify the game of Tetris for themselves, the first feature is obviously differentiating. But when most people hear the second one, their reaction is that it can’t possibly be true — it must be a lie. You can’t change virtual hardware on a VM without rebooting it, can you?

No way you can change CPU or RAM without rebooting a VM!

Do you suppose I’d check that out before leaving the only employer I’ve ever known in my adult life?

I spun up a VM, installed Apache, launched a load test from my desktop against the web server I had just created, changed both the CPU cores and RAM on the server instance, confirmed the change at the VM command line, and allowed the load test to end.  You know what the load test log showed?

Number of errors: 0.

The Apache web server never went down, despite the virtual hardware change, and handled HTTP requests every 40 milliseconds. I never even lost my remote login session. Whoa.
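For the curious, a miniature version of that experiment can be sketched in Python. A local HTTP server stands in for the Apache instance (the request count and timeout are illustrative), and a simple load loop counts errors:

```python
# Minimal load-test sketch: serve HTTP locally and count failed requests.
# Zero errors means the server stayed up for the whole run.
import http.server
import threading
import urllib.request

# Bind to port 0 so the OS picks a free ephemeral port.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

errors = 0
for _ in range(100):                      # request count is illustrative
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            if resp.status != 200:
                errors += 1
    except OSError:
        errors += 1

server.shutdown()
print(f"Number of errors: {errors}")
```

The real test, of course, had the interesting part happen in the middle of the loop: the VM's cores and RAM were changed while requests were still flowing.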

But wait, there’s more (and more to come)

Throw in the fact that the ProfitBricks block storage platform takes advantage of InfiniBand to provide not only RAID 10 redundancy, but RAID 10 mirrored across two availability zones, and I was completely sold.  I realized that ProfitBricks founder, CTO, and CEO Achim Weiss took the data center efficiency knowledge that gave 1&1 a tremendous price advantage and combined it with supercomputing technology to create a cloud computing game-changer that his engineering team is just beginning to tap into. I can't wait to see what they do with object storage, databases, and everything else that you'd expect from a full IaaS offering. I had to be a part of that.

Simply put: ProfitBricks uses InfiniBand to enable Cloud Computing 2.0.

And that’s why, after 19 years, I left HP.

Product Flash – Bridgeworks Potomac 40Gb iSCSI-to-SAS Bridge

Written By: Erin Filliater, Enterprise Market Development Manager

 

The amount of worldwide digital information is growing on a daily basis, and all of that data has to be stored somewhere, usually in external storage infrastructures, systems and devices.  Of course, in order for that information to be useful, you need to have fast access to it when your application calls for it.  Enter Bridgeworks’ newest bridging product, the Potomac ESAS402800 40Gb iSCSI-to-SAS protocol bridge.  The first to take advantage of 40Gb/s data center infrastructures, the ESAS402800 integrates Mellanox 40Gb iSCSI technology to provide the fastest iSCSI SAN connectivity to external SAS devices such as disk arrays, LTO6 tape drives and tape libraries, allowing data center administrators to integrate the newest storage technologies into their environments without disrupting their legacy systems.

In addition to flat-out speed, plug-and-play connectivity and web-based GUI management make the ESAS402800 easy to install and operate.  Adaptive read- and write-forward caching techniques allow the ESAS402800 bridge to share storage effectively in today's highly virtualized environments.

 

All of this adds up to easier infrastructure upgrades, more effective storage system migration and realization of the full performance potential of new SAS-connected storage systems. Pretty impressive for a single device.

 

Find out more about the recent Potomac ESAS402800 40Gb iSCSI-to-SAS bridge launch at Bridgeworks’ website:

http://www.4bridgeworks.com/news_and_press_releases/press_releases.phtml?id=252&item=26

RDMA – Cloud Providers’ “Secret Sauce”

Written By: Eli Karpilovski, Manager, Cloud Market Development

 

With expansive growth expected in the cloud computing market (some researchers expect it to grow from $70.1 billion in 2012 to $158.8 billion in 2014), cloud service providers must find ways to provide increasingly sustainable performance. At the same time, they must accommodate an increasing number of internet users, whose expectations of improved and consistent response times are growing.

 

However, service providers cannot increase performance if the corresponding cost also rises. What these providers need is a way to deliver low latency, fast response, and increasing performance while minimizing the cost of the network.

 

RDMA is one good way to accomplish this. Traditionally, centralized storage was either slow or created bottlenecks, which deemphasized the need for fast storage networks. With the advent of fast solid-state devices, we are seeing a need for a very fast, converged network to leverage the capabilities on offer. In particular, we are starting to see cloud architectures use RDMA-based storage appliances to accelerate storage access times, reduce latency and achieve the best CPU utilization at the endpoint.

 

To learn more about the use of RDMA in providing cloud infrastructure that meets performance, availability and agility needs, now and in the future, check the following link.

 

Mellanox – InfiniBand makes headway in the cloud (YouTube)

RDMA Interconnects for Storage: Fast, Efficient Data Delivery

Written By: Erin Filliater, Enterprise Market Development Manager

We all know that we live in a world of data, data and more data. In fact, IDC predicts that in 2015, the amount of data created and replicated will reach nearly 8 Zettabytes. With all of this data stored in external storage systems, the way data is transferred from storage to a server or application becomes critical to effectively utilizing that information. Couple this with today’s shrinking IT budgets and “do more with less” mindsets, and you have a real challenge on your hands. So, what’s a data center storage administrator to do?

Remote Direct Memory Access (RDMA) based interconnects offer an ideal option for boosting data center efficiency, reducing overall complexity and increasing data delivery performance. Available over InfiniBand and Ethernet, with RDMA over Converged Ethernet (RoCE), RDMA allows data to be transferred from storage to server without passing the data through the CPU and main memory path of TCP/IP Ethernet. Greater CPU and overall system efficiencies are attained because the storage and servers’ compute power is used for just that—computing—instead of processing network traffic. Bandwidth and latency are also of interest: both InfiniBand and RoCE feature microsecond transfer latencies, and bandwidths up to 56Gb/s. Plus, both can be effectively used for data center interconnect consolidation. This translates to screamingly fast application performance, better storage and data center utilization and simplified network management.

On a performance basis, RDMA based interconnects are actually more economical than other alternatives, both in initial cost and in operational expenses. Additionally, because RDMA interconnects are available with such high bandwidths, fewer cards and switch ports are needed to achieve the same storage throughput. This enables savings in server PCIe slots and data center floor space, as well as overall power consumption. It’s an actual solution for the “do more with less” mantra.
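To make the port-consolidation point concrete, here is a rough sketch of how many ports different links need to hit a given storage throughput. The usable per-port rates are approximations (encoding and protocol overhead vary by technology):

```python
# Port consolidation: fewer high-bandwidth ports deliver the same throughput.
# Per-port rates are approximate usable GB/s, not exact line rates.
import math

target_gbps = 6.0                   # desired storage throughput, GB/s
links = {
    "10GbE": 1.2,                   # ~1.2 GB/s usable per port
    "40GbE/RoCE": 4.8,              # ~4.8 GB/s usable per port
    "56Gb FDR InfiniBand": 6.8,     # ~6.8 GB/s usable per port
}

for name, rate in links.items():
    ports = math.ceil(target_gbps / rate)
    print(f"{name}: {ports} port(s) for {target_gbps} GB/s")
```

Five 10GbE ports collapse to two 40GbE ports or a single FDR InfiniBand port in this example, which is where the savings in PCIe slots, switch ports and power come from.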

So, the next time your application performance isn’t making the grade, rather than simply adding more CPUs, storage and resources, maybe it’s time to consider a more efficient data transfer path.

Find out more: http://www.mellanox.com/page/storage