Monthly Archives: February 2013

Mellanox Joins the Network Intelligence Alliance

We are happy to join the Network Intelligence Alliance, an industry organization created for collaboration among the Network Economy's technology providers. Through our participation in the Alliance, Mellanox will help develop and market innovative solutions that further improve networking for enterprises, cloud providers and telecom operators.

 

Using Mellanox's low-latency, CPU-efficient 10/40GbE NICs and switches, customers can deploy an embedded virtual switch (eSwitch) to run virtual machine traffic at bare-metal performance and provide hardened security and QoS, all with simpler management through Software Defined Networking (SDN) and OpenFlow APIs. The hardware-based security and isolation features in our 10/40GbE solutions can enable wider adoption of multi-tenant clouds while maintaining user service level agreements (SLAs). In addition, by utilizing SR-IOV to bypass the hypervisor, customers can host more VMs per server when virtualizing network functions on their cloud and data center server and storage infrastructure.
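
As a rough illustration of the SR-IOV piece, the sketch below enables a handful of virtual functions (VFs) on a Linux host so that guest VMs can be attached directly to the NIC, bypassing the hypervisor's software switch. It assumes a driver that exposes the standard sriov_numvfs/sriov_totalvfs sysfs entries; the interface name and VF count are hypothetical.

    # Rough sketch: enable SR-IOV virtual functions on a Linux host (run as root).
    # Assumes the NIC driver exposes the standard sriov_numvfs/sriov_totalvfs sysfs
    # entries; the interface name and VF count below are hypothetical.
    from pathlib import Path

    IFACE = "eth2"      # hypothetical Mellanox 10/40GbE interface
    NUM_VFS = 8         # hypothetical number of VFs to expose to guest VMs

    dev = Path(f"/sys/class/net/{IFACE}/device")
    total_vfs = int((dev / "sriov_totalvfs").read_text())
    if NUM_VFS > total_vfs:
        raise ValueError(f"{IFACE} supports at most {total_vfs} VFs")

    # The kernel requires the VF count to be reset to 0 before it can be changed.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(NUM_VFS))
    print(f"Enabled {NUM_VFS} VFs on {IFACE}; each VF can be passed through to a VM")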

 

In a world that now runs on networks, accurate visibility and precise tracking of the data crossing those networks have become crucial to the availability, performance and security of applications and services. The growing complexity of IP transactions, the explosion of mobile applications, and the mainstream adoption of cloud computing surpass the capabilities of conventional tools to improve how networks operate, expand services, and cope with cybersecurity. Just as Business Intelligence solutions emerged to unlock information hidden in the enterprise, Network Intelligence is an emerging category of technology that reveals the critical details locked inside network traffic and transactions.
Mellanox is excited to be a part of this great group and we are looking forward to collaborating with other members.

http://www.mellanox.com/

Mellanox encourages you to join our community and follow us on: LinkedIn, Mellanox Blog, Twitter, YouTube, Mellanox Community

Xyratex Advances Lustre Initiative

 

The Lustre® file system has played a significant role in the high performance computing industry since its release in 2003.  Lustre is used in many of the top HPC supercomputers in the world today, and has a strong development community behind it.  Last week, Xyratex announced plans to purchase the Lustre trademark, logo, website and associated intellectual property from Oracle, who acquired them with the purchase of Sun Microsystems in 2010. Xyratex will assume responsibility for customer support for Lustre and has pledged to continue its investment in and support of the open source community development.

 

Both Xyratex and the Lustre community will benefit from the purchase. The Lustre community now has an active, stable promoter whose experience and expertise are aligned with its major market segment, HPC, and Xyratex can confidently continue to leverage the Lustre file system to drive increased value in its ClusterStor™ product line, which integrates Mellanox InfiniBand and Ethernet solutions. In a blog post on the Xyratex website, Ken Claffey made the point that Xyratex's investment in Lustre is particularly important to the company: Xyratex sees its business "indelibly intertwined with the health and vibrancy of the Lustre community" and bases all of its storage solutions on the Lustre file system. It sounds like a winning proposition for both sides.

 

Find out more about Xyratex’ acquisition of Lustre: http://www.xyratex.com/news/press-releases/xyratex-advances-lustre%C2%AE-initiative-assumes-ownership-related-assets

 

http://community.mellanox.com/groups/storage

 

 

The Mellanox SX1018HP is a game changer for squeezing every drop of latency out of your network

Guest blog by Steve Barry, Product Line Manager for HP Ethernet Blade Switches

One of the barriers to adoption of blade server technology has been the limited number of network switches available for blade enclosures. Organizations requiring unique switching capabilities or extra bandwidth have had to rely on top-of-rack switches built by networking companies with little or no presence in the server market. The result was a potential customer base of users who wanted the benefits of blade server technology but were forced to stay with rack servers and switches for lack of alternative networking products. Here's where Hewlett Packard has once again shown why it remains the leader in blade server technology by announcing a new blade switch that leaves the others in the dust.

 

Mellanox SX1018HP Ethernet Blade Switch

Working closely with our partner Mellanox, HP has just announced a new blade switch for the c-Class enclosure that is designed specifically for customers who demand performance and raw bandwidth. The Mellanox SX1018HP is built on the latest SwitchX ASIC technology and for the first time gives servers a direct path to 40Gb. In fact, this switch can provide up to sixteen 40Gb server downlinks and up to eighteen 40Gb network uplinks for an amazing 1.36Tb/s of throughput. Now even the most demanding virtualized server applications can get the bandwidth they need. Financial services customers, especially those involved in high-frequency trading, look to squeeze every drop of latency out of their network. Again, the Mellanox SX1018HP excels, dropping port-to-port latency to an industry-leading 230ns at 40Gb. No other blade switch currently available can make that claim.
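
The headline bandwidth figure is easy to sanity-check; the short calculation below simply multiplies the port counts and line rate quoted above.

    # Back-of-the-envelope check of the SX1018HP throughput figure quoted above.
    downlinks = 16             # 40Gb server-facing downlinks
    uplinks = 18               # 40Gb network-facing uplinks
    line_rate_gbps = 40

    aggregate_tbps = (downlinks + uplinks) * line_rate_gbps / 1000.0
    print(f"Aggregate throughput: {aggregate_tbps:.2f} Tb/s")   # -> 1.36 Tb/s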

For customers currently running InfiniBand networks, the appeal of collapsing their data requirements onto a single network has always been tempered by the lack of support for Remote Direct Memory Access (RDMA) on Ethernet networks. Again, HP and Mellanox lead the way in blade switches. The SX1018HP supports RDMA over Converged Ethernet (RoCE), allowing RDMA-tuned applications to work across both InfiniBand and Ethernet networks. When coupled with the recently announced HP544M 40Gb Ethernet/FDR InfiniBand adapter, customers can now support RDMA end to end on either network and begin the migration to a single Ethernet infrastructure. Finally, many customers already familiar with Mellanox InfiniBand switches provision and manage their network with Unified Fabric Manager (UFM). The SX1018HP can be managed and provisioned with this same tool, providing a seamless transition to the Ethernet world. Of course, standard CLI and secure web browser management are also available.

Incorporating this switch along with the latest generation of HP blade servers and network adapters now gives any customer the same speed, performance and scalability that was previously limited to rack deployments using a hodgepodge of suppliers.   Data center operations that cater to High Performance Cluster Computing (HPCC), Telecom, Cloud Hosting Services and Financial Services will find the HP blade server/Mellanox SX1018HP blade switch a compelling and unbeatable solution.

 

 Click here for more information on the new Mellanox SX1018HP Ethernet Blade Switch.

Product Flash: NetApp EF540 Enterprise Flash Array

 

Written By: Erin Filliater, Enterprise Market Development Manager

Via the Storage Solutions Group

 

Everyone knows that flash storage is a big deal.  However, one of the gaps in the flash storage market has been in enterprise flash systems. Flash caching has been part of many enterprise storage environments for some time, but enterprise all-flash arrays haven't.  This week, that all changed with the launch of NetApp's EF540 Flash Array.  Targeted at business-critical applications, the EF540 offers the enterprise features we're used to in a NetApp system: high availability, reliability, manageability, snapshots, synchronous and asynchronous replication, backup, and a fully redundant architecture.  Add to that some impressive performance statistics (over 300,000 IOPS, sub-millisecond latency, and 6GB/s throughput) and you have a system to be reckoned with.

NetApp® EF540 Flash Array

 

What does all this mean for the IT administrator?  Database application performance boosts of up to 500% over traditional storage infrastructures mean faster business operation results, decreased time-to-market and increased revenue.  Enterprise RAS features lead to less downtime, intuitive management and greater system ROI.

 

Of course, as mentioned earlier in the week in the Are You Limiting Your Flash Performance? post, the network that flash systems are connected to also plays a role in boosting performance and reliability.  To this end, NetApp has equipped the EF540 well, with 40Gb/s QDR InfiniBand, 10Gb/s iSCSI and 8Gb/s Fibre Channel connectivity options, all with automated I/O path failover for robustness.

 

Following the flash trend, NetApp also announced the all-new FlashRay family of purpose-built enterprise flash arrays, with expected availability in early 2014.  The FlashRay products will focus on efficient, flexible, scale-out architectures to maximize the value of flash installations across the entire enterprise data center stack.  Given all this and the enterprise features of the EF540, there's no longer a reason not to jump on the flash bandwagon and start moving your enterprise ahead of the game.

 

Find out more about the EF540 Flash Array and FlashRay product family at NetApp’s website: http://www.netapp.com/us/products/storage-systems/flash-ef540/ and http://www.netapp.com/us/company/news/press-releases/news-rel-20130219-678946.aspx

 

Find out more about how Mellanox accelerates NetApp storage solutions at: https://solutionconnection.netapp.com/mellanox-connectx-3-virtual-protocol-interconnect-vpi-adapter-cards.aspx

HP updates server, storage and networking line-ups

 

HP updated its enterprise hardware portfolio, with the most notable additions being networking devices that combine wired and wireless infrastructure to better manage bring-your-own-device policies. One of the highlights is the Mellanox SX1018HP Ethernet switch, which lowers port latency and improves downlinks.

 

The Mellanox SX1018HP Ethernet Switch is the highest-performing Ethernet fabric solution in a blade switch form factor. It delivers up to 1.36Tb/s of non-blocking throughput, perfect for High-Performance Computing, High-Frequency Trading and Enterprise Data Center applications.

 

Utilizing the latest Mellanox SwitchX ASIC technology, the SX1018HP is an ultra-low-latency switch that is ideally suited as an access switch, providing InfiniBand-like performance with sixteen 10Gb/40Gb server-side downlinks and eighteen 40Gb QSFP+ uplinks to the core, with port-to-port latency as low as 230ns.

 

The Mellanox SX1018HP Ethernet Switch has a rich set of Layer 2 networking and security features and supports faster application performance and enhanced server CPU utilization with RDMA over Converged Ethernet (RoCE), making this switch the perfect solution for any high performance Ethernet network.

 

Mellanox SX1018HP Ethernet Switch

 

HP is the first to provide 40Gb downlinks to each blade server, enabling InfiniBand-like performance in an Ethernet blade switch. In another industry first, the low-latency HP SX1018 Ethernet Switch provides the lowest port-to-port latency of any blade switch, more than four times faster than previous switches.

 

When combined with the space, power and cooling benefits of blade servers, the Mellanox SX1018HP Ethernet Blade Switch provides the perfect network interface for Financial applications and high performance clusters.

 

How Windows Azure achieved 90.2 percent efficiency

Written By: Eli Karpilovski, Manager, Cloud Market Development

 

Windows Azure, one of the largest public cloud providers in the world today, recently ran the LINPACK system performance benchmark to demonstrate the performance capabilities of its 'Big Compute' hardware. Windows Azure submitted the results and was certified as one of the world's fastest supercomputers on the TOP500 list.

 

The results were super impressive: 151.3 TFlops on 8,065 cores with 90.2 percent efficiency, 33% higher efficiency than other major 10GbE cloud providers that ran the same benchmark!
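
As a rough sanity check on those numbers, the calculation below back-computes the theoretical peak. The per-core figure is our assumption (roughly 2.6 GHz x 8 double-precision FLOPs per cycle), not something stated in the benchmark post.

    # Back-of-the-envelope check of the quoted LINPACK efficiency.
    # Assumed (not stated in the post): each core peaks at ~2.6 GHz x 8 DP FLOPs/cycle.
    cores = 8065
    per_core_peak_gflops = 2.6 * 8         # assumed theoretical peak per core
    rmax_tflops = 151.3                    # measured LINPACK result

    rpeak_tflops = cores * per_core_peak_gflops / 1000.0
    efficiency = rmax_tflops / rpeak_tflops
    print(f"Rpeak ~ {rpeak_tflops:.1f} TFlops, efficiency ~ {efficiency:.1%}")
    # -> Rpeak ~ 167.8 TFlops, efficiency ~ 90.2%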

 

What is their secret? 40Gb/s InfiniBand network with RDMA – the Mellanox way.

 

Learn more about it >>  (http://blogs.msdn.com/b/windowsazure/archive/2012/11/13/windows-azure-benchmarks-show-top-performance-for-big-compute.aspx)

 

Join the Mellanox Cloud Community: http://community.mellanox.com/groups/cloud

Are You Limiting Your Flash Performance?

 

Written By: Erin Filliater, Enterprise Market Development Manager

Via the Storage Solutions Group

As flash storage has become increasingly available at lower and lower prices, many organizations are leveraging flash’s low-latency features to boost application and storage performance in their data centers.  Flash storage vendors claim their products can increase application performance by leaps and bounds, and a great many data center administrators have found that to be true.  But what if your flash could do even more?

 

One of the main features of flash storage is its ability to drive massive amounts of data to the network with very low latencies.  Data can be written to and retrieved from flash storage in a matter of microseconds at speeds exceeding several gigabytes per second, allowing applications to get the data they need and store their results in record time.  Now, suppose you connect that ultra-fast storage to your compute infrastructure using 1GbE technology.  A single 1GbE port can transfer data at around 120MB/s.  For a flash-based system driving, say, 8GB/s of data, you’d need sixty-seven 1GbE ports to avoid bottlenecking your system.  Most systems have only eight ports available, so using 1GbE would limit your lightning-fast flash to just under 1GB/s, an eighth of the performance you could be getting. That’s a bit like buying a Ferrari F12berlinetta (max speed: >211 mph) and committing to drive it only on residential streets (speed limit: 25 mph).  Sure, you’d look cool, but racing neighborhood kids on bicycles isn’t really the point of a Ferrari, is it?  Upgrade that 1GbE connection to 10GbE, and you can cover your full Flash bandwidth with seven ports, if your CPU can handle the increased TCP stack overhead and still perform application tasks.  In terms of our vehicular analogy, you’re driving the Ferrari on the highway now, but you’re still stuck in third gear.  So, how do you get that Ferrari to the Bonneville Salt Flats and really let loose?

 

Take one step further in your interconnect deployment and upgrade that 10GbE connection to a 40GbE connection with RDMA over Converged Ethernet (RoCE), or to a 56Gb/s FDR InfiniBand connection. Two ports of either protocol will give you full-bandwidth access to your flash system, and RDMA means ultra-low CPU overhead and increased overall efficiency.  Your flash system will perform to its fullest potential, and your application performance will improve drastically.  Think land-speed records, except in a data center.
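
The port math in the last two paragraphs is easy to reproduce. In the sketch below, the usable bandwidth per port is a rough approximation of line rate minus protocol and encoding overhead; the 8GB/s flash figure is the example used above.

    import math

    # Reproduce the port-count arithmetic above. Usable bandwidth per port is an
    # approximation of the raw line rate after protocol/encoding overhead.
    flash_gb_per_sec = 8.0                      # example flash array throughput

    usable_gb_per_port = {
        "1GbE":            0.12,                # ~120 MB/s
        "10GbE":           1.2,
        "40GbE (RoCE)":    4.8,
        "FDR IB (56Gb/s)": 6.8,                 # ~54 Gb/s effective data rate
    }

    for link, bw in usable_gb_per_port.items():
        ports = math.ceil(flash_gb_per_sec / bw)
        print(f"{link:>16}: {ports} port(s) to carry {flash_gb_per_sec} GB/s")
    # -> 1GbE: 67 ports, 10GbE: 7, 40GbE: 2, FDR InfiniBand: 2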

 

Flash and RDMA diagram

 

So, if your flash-enhanced application performance isn’t quite what you expected, perhaps it’s your interconnect and not your flash system that’s underperforming.

 

Find out more about RoCE and InfiniBand technologies and how they can enhance your storage performance: http://www.mellanox.com/page/storage and http://www.mellanox.com/blog/2013/01/rdma-interconnects-for-storage-fast-efficient-data-delivery/

Why Atlantic.Net Chose Mellanox

Atlantic.Net is a global cloud hosting provider. With Mellanox interconnect technology, Atlantic.Net can now offer customers more robust cloud hosting services through a reliable, adaptable infrastructure, all at a lower cost compared to traditional interconnect solutions.

Why Atlantic.Net Chose Mellanox

  • Price and Cost Advantage

Expensive hardware, scaling overhead and administrative costs can all be avoided with Mellanox's interconnect technologies, reducing costs by 32% per application.

  • Lower Latency and Faster Storage Access

By utilizing iSCSI Extensions for RDMA (iSER), implemented on its KVM servers over a single converged InfiniBand adapter, Atlantic.Net gets lower latency and less complexity, resulting in lower costs to the user.

  • Consolidate I/O Transparently

LAN and SAN connectivity for VMs on KVM is tightly integrated with Atlantic.Net's management environment, allowing Atlantic.Net to transparently consolidate LAN, SAN, live-migration and other traffic.

The Bottom Line

By deploying Mellanox's InfiniBand solution, Atlantic.Net can support high-volume and high-performance requirements on demand and offer a service that scales as customers' needs change and grow. Having built a high-performance, reliable and redundant storage infrastructure using off-the-shelf commodity hardware, Atlantic.Net was able to avoid purchasing expensive Fibre Channel storage arrays, saving significant capital expense per storage system.

 

http://youtu.be/frTWWwjacyc

The Promise of an End-to-End SDN Solution: Can It Be Done?

Written By: Eli Karpilovski, Manager, Cloud Market Development

 

With OpenStack, the new open source cloud orchestration platform, the promise of flexible network virtualization and network overlays is looking closer than ever. The vision of this platform is to enable the on-demand creation of many distinct networks on top of one underlying physical infrastructure in the cloud environment. The platform will support automated provisioning and management of large groups of virtual machines or compute resources, including extensive monitoring in the cloud.

 

There is still a lot of work to be done, as there are many concerns around the efficiency and simplicity of the management solution for compute and storage resources. A mature solution will need to incorporate different approaches to intra-server provisioning, QoS and vNIC management: for example, leaning on local network adapters capable of handling requests via the OpenFlow protocol, or using a more conventional approach managed by the switch. Relying on only one method might create performance and efficiency penalties.

 

Learn how Mellanox's OpenStack solution offloads the orchestration platform from the management of individual networking elements, with the end goal of simplifying operations of large-scale, complex infrastructures: www.mellanox.com/openstack

 

Have questions? Join our Cloud Community today!

Why I left HP after 19 years to join ProfitBricks

On 02.12.13, in Cloud Computing, by Pete Johnson, new Platform Evangelist

Woz once said, “I thought I’d be an HPer for life.” While I don’t usually claim to have a whole lot in common with the man who designed the first computer I ever saw (an Apple II, summer ’78), in this instance it’s true. As it turns out, we were both wrong.

Pete Johnson, new Platform Evangelist for ProfitBricks

I stayed at HP as long as I did for lots of reasons. Business model diversity is one:  over the last two decades, I was lucky enough to be a front line coder, a tech lead, a project manager, and an enterprise architect while working on web sites for enterprise support, consumer ecommerce sales, enterprise online sales, all forms of marketing, and even post-sales printing press supplies reordering.   Most recently I was employee #37 for HP’s new public cloud offering where I performed a lot of roles including project management of web development teams, customer facing demonstrations at trade shows, and sales pitches for Fortune 500 CIOs.  But I also remained at HP because of the culture and values that came straight from Bill Hewlett and Dave Packard, which my early mentors instilled in me. You can still find those values there today if you look hard enough, and if anybody gets that, Meg Whitman does.

Why leave HP for ProfitBricks then?

So if I still have such a rosy view of HP, despite recent bumpiness, why did I leave to become the Platform Evangelist for ProfitBricks?

Three reasons:

  1. InfiniBand
  2. InfiniBand
  3. InfiniBand

If you are anything like the sample of computer industry veterans I told about my move last week, you just said, "What the heck is InfiniBand?" Let me explain what it is and why it is poised to fundamentally change cloud computing.

Ethernet is the dominant network technology used in data centers today. Originally created during the Carter administration, it uses a hierarchical structure of LAN segments, which ultimately means that packets have exactly one path to traverse when moving from point A to point B anywhere in the network. InfiniBand, a popular 21st-century technology in the supercomputing and high-performance computing (HPC) communities, uses a grid or mesh system that gives packets multiple paths from point A to point B. This key difference, among other nuances, helps give InfiniBand a top speed of 80 Gbits/sec, 80x faster than the standard 1Gbit/sec Ethernet connections of Amazon's AWS.
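
To make the one-path-versus-many-paths point concrete, here is a toy sketch using the networkx graph library; the two topologies are illustrative only, not real data center layouts.

    # Toy illustration of the routing difference described above: a spanning-tree
    # Ethernet hierarchy leaves exactly one usable path between two hosts, while a
    # mesh/fat-tree style fabric offers several.
    import networkx as nx

    # Hierarchical tree: two hosts on separate edge switches under a single core.
    tree = nx.Graph([("hostA", "edge1"), ("hostB", "edge2"),
                     ("edge1", "core"), ("edge2", "core")])

    # Two-spine mesh: each edge switch uplinks to both spine switches.
    mesh = nx.Graph([("hostA", "edge1"), ("hostB", "edge2"),
                     ("edge1", "spine1"), ("edge1", "spine2"),
                     ("edge2", "spine1"), ("edge2", "spine2")])

    for name, fabric in (("tree", tree), ("mesh", mesh)):
        paths = list(nx.all_simple_paths(fabric, "hostA", "hostB"))
        print(f"{name}: {len(paths)} path(s) from hostA to hostB")
    # -> tree: 1 path(s); mesh: 2 path(s)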

What’s the big deal about InfiniBand?

“So what?” you may be thinking. “A faster cloud network is nice, but it doesn’t seem like THAT big a deal.”

Actually, it is a VERY big deal when you stop and think about how a cloud computing provider can take advantage of a network like this.

As founder and CMO Andreas Gauger put it to me during the interview process, virtualization is a game of Tetris in which you are trying to fit various sizes of virtual machines onto physical hardware to maximize utilization. This is particularly critical for a public cloud provider. With InfiniBand, ProfitBricks can rearrange the pieces, and at 80 Gbits/sec, our hypervisor can move a VM from one physical machine to another without the VM ever knowing. This helps us maximize the physical hardware and keep prices competitive, but it also means two other things for our customers:

  • You can provision any combination of CPU cores and RAM you want, up to and including the size of the full physical hardware we use
  • You can change the number of CPU cores or amount of RAM on-the-fly, live, without rebooting the VM

In a world where other public cloud providers force you into cookie cutter VM sizes in an attempt to simplify the game of Tetris for themselves, the first feature is obviously differentiating. But when most people hear the second one, their reaction is that it can’t possibly be true — it must be a lie. You can’t change virtual hardware on a VM without rebooting it, can you?

No way you can change CPU or RAM without rebooting a VM!

Do you suppose I’d check that out before leaving the only employer I’ve ever known in my adult life?

I spun up a VM, installed Apache, launched a load test from my desktop against the web server I just created, changed both the CPU Cores and RAM on the server instance, confirmed the change at the VM command line, and allowed the load test to end.  You know what the load test log showed?

Number of errors: 0.

The Apache web server never went down, despite the virtual hardware change, and handled HTTP requests every 40 milliseconds. I never even lost my remote login session. Whoa.
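
For the curious, a comparable check takes only a few lines of Python. The sketch below is hypothetical: the original test's tooling isn't named in the post, and the URL, interval and duration here are made up.

    # Hypothetical sketch of a comparable check: poll a web server at a fixed
    # interval and count errors while the VM's cores/RAM are being resized.
    import time
    import urllib.request

    URL = "http://example-vm.test/"     # hypothetical server under test
    INTERVAL_S = 0.04                   # one request every 40 ms
    DURATION_S = 300                    # keep polling while the resize happens

    errors = 0
    deadline = time.time() + DURATION_S
    while time.time() < deadline:
        try:
            urllib.request.urlopen(URL, timeout=2).read()
        except Exception:
            errors += 1
        time.sleep(INTERVAL_S)

    print(f"Number of errors: {errors}")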

But wait, there’s more (and more to come)

Throw in the fact that the ProfitBricks block storage platform takes advantage of InfiniBand to provide not only RAID 10 redundancy, but RAID 10 mirrored across two availability zones, and I was completely sold.  I realized that ProfitBricks founder, CTO, and CEO Achim Weiss took the data center efficiency knowledge that gave 1&1 a tremendous price advantage and combined it with supercomputing technology to create a cloud computing game-changer that his engineering team is just beginning to tap into. I can't wait to see what they do with object storage, databases, and everything else that you'd expect from a full IaaS offering. I had to be a part of that.

Simply put: ProfitBricks uses InfiniBand to enable Cloud Computing 2.0.

And that’s why, after 19 years, I left HP.