Category Archives: Storage

Benchmarking With Real Workloads and the Benefits of Flash and Fast Interconnects

Benchmarking is a term heard throughout the tech industry as a measure of success and pride in a particular solution’s ability to handle this or that workload.  However, most benchmarks feature a simulated workload, and in reality, a deployed solution may perform very differently.  This is especially true with databases, since the types of data and workloads can vary greatly.

 

StorageReview.com and MarkLogic recently bucked the benchmarking trend, developing a benchmark that tests storage systems against an actual NoSQL database instance.  Testing is done in the StorageReview lab, and the first round focused heavily on host-side flash solutions.  Not surprisingly, flash-accelerated solutions took the day, with the lowest overall latencies for all database operations, generally blowing non-flash solutions out of the water and showing that NoSQL database environments can benefit significantly from the addition of flash-accelerated systems.

 

In order to accurately test all of these flash solutions, the test environment had to be set up so that no other component would bottleneck the testing.  As it’s often the interconnect between database, client and storage nodes that limits overall system performance, StorageReview plumbed the test setup with Mellanox ultra-low-latency FDR 56Gb/s InfiniBand adapter cards and switches, ensuring that the flash solutions could realize their full performance and that the results were a true apples-to-apples comparison.

 

[Figure: MarkLogic Benchmark Setup]

Find out more about the benchmark and testing results at StorageReview’s website: http://www.storagereview.com/storagereview_debuts_marklogic_nosql_storage_performance_benchmark

 

Don’t forget to join the Mellanox Storage Community: http://community.mellanox.com/groups/storage

Product Flash: DDN hScaler Hadoop Appliance

 

Of the many strange-sounding application and product names out there in the industry today, Hadoop remains one of the most recognized.  Why?  Well, we’ve talked before about the impact that data creation, storage and management are having on the overall business atmosphere; it’s the quintessential Big Data problem.  Since all that data has no value unless it’s made useful and actionable through analysis, a variety of Big Data analytics software and hardware solutions have been created.  The most popular solution on the software side is, of course, Hadoop.  Recently, however, DDN announced an exciting new integrated solution to solve the Big Data equation: hScaler.

 

Based on DDN’s award-winning SFA 12K architecture, hScaler is the world’s first enterprise Hadoop appliance.  Unlike many Hadoop installations, hScaler is factory-configured and simple to deploy, eliminating the need for trial-and-error approaches that require substantial expertise and time to configure and tune.  The hScaler can be deployed in a matter of hours, compared to homegrown approaches requiring weeks or even months, allowing enterprises to focus on their actual business, and not the mechanics of the Hadoop infrastructure.

[Figure: DDN hScaler]

 

Performance-wise, the hScaler is no slouch.  Acceleration of the Hadoop shuffle phase through Mellanox InfiniBand and 40GbE RDMA interconnects, ultra-dense storage and an efficient processing infrastructure delivers results up to 7x faster than those of typical Hadoop installations.  That means quicker time-to-insight and a more competitive business.

 

For enterprise installations, hScaler includes an integrated ETL engine, over 200 connectors for data ingestion and remote manipulation, high availability and management through DDN’s DirectMon framework.  Independently scalable storage and compute resources provide additional flexibility and cost savings, as organizations can choose to provision to meet only their current needs, and add resources later as their needs change.  Because hScaler’s integrated architecture is four times as dense as commodity installations, additional TCO dollars can be saved in floorspace, power and cooling.

 

Overall, hScaler looks to be a great all-in-one, plug-n-play package for enterprise organizations that need Big Data results fast, but don’t have the time, resources or desire to build an installation from the ground up.

 

Find out more about the hScaler Hadoop Appliance at DDN’s website: http://www.ddn.com/en/products/hscaler-appliance and http://www.ddn.com/en/press-releases/2013/new-era-of-hadoop-simplicity

 

Don’t forget to join the Mellanox Storage Community: http://community.mellanox.com/groups/storage

 

Xyratex Advances Lustre Initiative

 

The Lustre® file system has played a significant role in the high performance computing industry since its release in 2003.  Lustre is used in many of the top HPC supercomputers in the world today, and has a strong development community behind it.  Last week, Xyratex announced plans to purchase the Lustre trademark, logo, website and associated intellectual property from Oracle, which acquired them with its purchase of Sun Microsystems in 2010.  Xyratex will assume responsibility for customer support for Lustre and has pledged to continue its investment in and support of open source community development.

 

Both Xyratex and the Lustre community will benefit from the purchase.  The Lustre community now has an active, stable promoter whose experience and expertise are aligned with its major market segment, HPC, and Xyratex can confidently continue to leverage the Lustre file system to drive increased value in its ClusterStor™ product line, which integrates Mellanox InfiniBand and Ethernet solutions.  In a blog post on the Xyratex website, Ken Claffey made the point that Xyratex’ investment in Lustre is particularly important to the company, as Xyratex sees its business “indelibly intertwined with the health and vibrancy of the Lustre community” and offers all of its storage solutions based on the Lustre file system.  Sounds like a winning proposition for both sides.

 

Find out more about Xyratex’ acquisition of Lustre: http://www.xyratex.com/news/press-releases/xyratex-advances-lustre%C2%AE-initiative-assumes-ownership-related-assets

 

Don’t forget to join the Mellanox Storage Community: http://community.mellanox.com/groups/storage

 

 

Product Flash: NetApp EF540 Enterprise Flash Array

 

Written By: Erin Filliater, Enterprise Market Development Manager

Via the Storage Solutions Group

 

Everyone knows that flash storage is a big deal.  However, one of the gaps in the flash storage market has been in enterprise flash systems: flash caching has been part of many enterprise storage environments for some time, but enterprise-grade all-flash arrays have been harder to come by.  This week, that all changed with the launch of NetApp’s EF540 Flash Array.  Targeted at business-critical applications, the EF540 features the enterprise capabilities we’re used to in a NetApp system: high availability, reliability, manageability, snapshots, synchronous and asynchronous replication and backup, and a fully redundant architecture.  Add to that some impressive performance statistics—over 300,000 IOPS, sub-millisecond latency and 6GB/s throughput—and you have a system to be reckoned with.
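As a rough aside (not from NetApp’s spec sheet; the block sizes below are illustrative assumptions), those IOPS and throughput numbers are linked by I/O size—throughput is simply IOPS times block size—so a quick sketch shows where an array flips from being IOPS-bound to bandwidth-bound:

```python
# Back-of-the-envelope: how IOPS and throughput relate at different I/O sizes.
# The 300,000 IOPS and 6 GB/s figures are the EF540 numbers quoted above;
# the block sizes are illustrative assumptions, not vendor data.
IOPS = 300_000
MAX_THROUGHPUT_GB_S = 6.0

for block_kib in (4, 8, 32, 64):
    # Throughput implied by the IOPS ceiling at this block size.
    gb_per_s = IOPS * block_kib * 1024 / 1e9
    # The array delivers whichever limit it hits first.
    delivered = min(gb_per_s, MAX_THROUGHPUT_GB_S)
    print(f"{block_kib:3d} KiB blocks: IOPS-limited at {gb_per_s:5.2f} GB/s "
          f"-> delivered ~{delivered:4.2f} GB/s")
```

At small random I/O the IOPS ceiling dominates; at large sequential I/O, the 6GB/s pipe does.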

[Figure: NetApp® EF540 Flash Array]

 

What does all this mean for the IT administrator?  Database application performance boosts of up to 500% over traditional storage infrastructures mean faster business operation results, decreased time-to-market and increased revenue.  Enterprise RAS features lead to less downtime, intuitive management and greater system ROI.

 

Of course, as mentioned earlier in the week in the Are You Limiting Your Flash Performance? post, the network flash systems are connected to also plays a role in boosting performance and reliability.  To this end, NetApp has equipped the EF540 well with 40Gb/s QDR InfiniBand, 10Gb/s iSCSI and 8Gb/s Fibre Channel connectivity options, all with automated I/O path failover for robustness.

 

Following the flash trend, NetApp also announced the all-new FlashRay family of purpose-built enterprise flash arrays, with expected availability in early 2014.  The FlashRay products will focus on efficient, flexible, scale-out architectures to maximize the value of flash deployments across the entire enterprise data center stack.  Given all this and the enterprise features of the EF540, there’s no longer a reason not to jump on the flash bandwagon and start moving your enterprise ahead of the game.

 

Find out more about the EF540 Flash Array and FlashRay product family at NetApp’s website: http://www.netapp.com/us/products/storage-systems/flash-ef540/ and http://www.netapp.com/us/company/news/press-releases/news-rel-20130219-678946.aspx

 

Find out more about how Mellanox accelerates NetApp storage solutions at: https://solutionconnection.netapp.com/mellanox-connectx-3-virtual-protocol-interconnect-vpi-adapter-cards.aspx

Are You Limiting Your Flash Performance?

 

Written By: Erin Filliater, Enterprise Market Development Manager

Via the Storage Solutions Group

As flash storage has become increasingly available at lower and lower prices, many organizations are leveraging flash’s low-latency features to boost application and storage performance in their data centers.  Flash storage vendors claim their products can increase application performance by leaps and bounds, and a great many data center administrators have found that to be true.  But what if your flash could do even more?

 

One of the main features of flash storage is its ability to drive massive amounts of data to the network with very low latencies.  Data can be written to and retrieved from flash storage in a matter of microseconds at speeds exceeding several gigabytes per second, allowing applications to get the data they need and store their results in record time.

Now, suppose you connect that ultra-fast storage to your compute infrastructure using 1GbE technology.  A single 1GbE port can transfer data at around 120MB/s.  For a flash-based system driving, say, 8GB/s of data, you’d need sixty-seven 1GbE ports to avoid bottlenecking your system.  Most systems have only eight ports available, so using 1GbE would limit your lightning-fast flash to just under 1GB/s, an eighth of the performance you could be getting.  That’s a bit like buying a Ferrari F12berlinetta (max speed: >211 mph) and committing to drive it only on residential streets (speed limit: 25 mph).  Sure, you’d look cool, but racing neighborhood kids on bicycles isn’t really the point of a Ferrari, is it?

Upgrade that 1GbE connection to 10GbE, and you can cover your full flash bandwidth with seven ports, if your CPU can handle the increased TCP stack overhead and still perform application tasks.  In terms of our vehicular analogy, you’re driving the Ferrari on the highway now, but you’re still stuck in third gear.  So, how do you get that Ferrari to the Bonneville Salt Flats and really let loose?

 

Take one step further in your interconnect deployment and upgrade that 10GbE connection to 40GbE with RDMA over Converged Ethernet (RoCE), or to 56Gb/s FDR InfiniBand.  Two ports of either protocol will give you full-bandwidth access to your flash system, and RDMA means ultra-low CPU overhead and increased overall efficiency.  Your flash system will perform to its fullest potential, and your application performance will improve drastically.  Think land-speed records, except in a data center.
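To put rough numbers behind the Ferrari analogy, here is a quick illustrative calculation (not from the original post; the usable per-port rates are approximations that vary with encoding overhead, PCIe generation and workload):

```python
import math

# Rough, illustrative usable data rates per port, in GB/s.
# These are approximations of nominal link speed minus protocol
# overhead; real-world numbers vary by platform and workload.
PORT_RATES_GB_S = {
    "1GbE":           0.12,
    "10GbE":          1.2,
    "40GbE (RoCE)":   4.8,
    "FDR InfiniBand": 6.8,
}

FLASH_THROUGHPUT_GB_S = 8.0  # flash system throughput from the example above

for link, rate in PORT_RATES_GB_S.items():
    ports = math.ceil(FLASH_THROUGHPUT_GB_S / rate)
    print(f"{link:>15}: {ports:3d} port(s) to carry "
          f"{FLASH_THROUGHPUT_GB_S} GB/s")
```

The output matches the prose above: sixty-seven 1GbE ports, seven 10GbE ports, but only two ports of 40GbE RoCE or FDR InfiniBand to carry the full 8GB/s.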

 

[Figure: Flash and RDMA diagram]

 

So, if your flash-enhanced application performance isn’t quite what you expected, perhaps it’s your interconnect and not your flash system that’s underperforming.

 

Find out more about RoCE and InfiniBand technologies and how they can enhance your storage performance: http://www.mellanox.com/page/storage and http://www.mellanox.com/blog/2013/01/rdma-interconnects-for-storage-fast-efficient-data-delivery/

Product Flash: Bridgeworks Potomac 40Gb iSCSI-to-SAS Bridge

Written By: Erin Filliater, Enterprise Market Development Manager

 

The amount of worldwide digital information is growing on a daily basis, and all of that data has to be stored somewhere, usually in external storage infrastructures, systems and devices.  Of course, in order for that information to be useful, you need to have fast access to it when your application calls for it.  Enter Bridgeworks’ newest bridging product, the Potomac ESAS402800 40Gb iSCSI-to-SAS protocol bridge.  The first to take advantage of 40Gb/s data center infrastructures, the ESAS402800 integrates Mellanox 40Gb iSCSI technology to provide the fastest iSCSI SAN connectivity to external SAS devices such as disk arrays, LTO6 tape drives and tape libraries, allowing data center administrators to integrate the newest storage technologies into their environments without disrupting their legacy systems.

In addition to flat-out speed, plug-and-play connectivity and web-based GUI management make the ESAS402800 easy to install and operate.  Adaptive read- and write-forward caching techniques allow the ESAS402800 bridge to share storage effectively in today’s highly virtualized environments.

 

All of this adds up to easier infrastructure upgrades, more effective storage system migration and realization of the full performance potential of new SAS-connected storage systems. Pretty impressive for a single device.

 

Find out more about the recent Potomac ESAS402800 40Gb iSCSI-to-SAS bridge launch at Bridgeworks’ website:

http://www.4bridgeworks.com/news_and_press_releases/press_releases.phtml?id=252&item=26

RDMA Interconnects for Storage: Fast, Efficient Data Delivery

Written By: Erin Filliater, Enterprise Market Development Manager

We all know that we live in a world of data, data and more data. In fact, IDC predicts that in 2015, the amount of data created and replicated will reach nearly 8 Zettabytes. With all of this data stored in external storage systems, the way data is transferred from storage to a server or application becomes critical to effectively utilizing that information. Couple this with today’s shrinking IT budgets and “do more with less” mindsets, and you have a real challenge on your hands. So, what’s a data center storage administrator to do?

Remote Direct Memory Access (RDMA) based interconnects offer an ideal option for boosting data center efficiency, reducing overall complexity and increasing data delivery performance. Available over InfiniBand and, via RDMA over Converged Ethernet (RoCE), over Ethernet as well, RDMA allows data to be transferred from storage to server without passing through the CPU and main memory path of TCP/IP Ethernet. Greater CPU and overall system efficiencies are attained because the storage and servers’ compute power is used for just that—computing—instead of processing network traffic. Bandwidth and latency are also of interest: both InfiniBand and RoCE feature microsecond transfer latencies, with bandwidths up to 40Gb/s for RoCE and 56Gb/s for InfiniBand. Plus, both can be effectively used for data center interconnect consolidation. This translates to screamingly fast application performance, better storage and data center utilization and simplified network management.
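To give a feel for what the RDMA model looks like to software, here is a minimal sketch using the pyverbs Python bindings that ship with rdma-core—my own illustration, not something from the original post—where the device name 'mlx5_0' is a hypothetical example (use pyverbs.device.get_device_list() to find yours). The key idea is that an application registers a buffer once, after which the adapter moves data directly between that memory and the wire, bypassing the kernel's TCP copy path:

```python
# Minimal sketch of RDMA memory registration with pyverbs (rdma-core).
# Assumes a Linux host with an RDMA-capable adapter; the device name
# below is a hypothetical example.
import pyverbs.device as d
import pyverbs.enums as e
from pyverbs.pd import PD
from pyverbs.mr import MR

with d.Context(name='mlx5_0') as ctx:   # open the RDMA device
    with PD(ctx) as pd:                 # protection domain for resources
        # Register a 4 KiB buffer. The adapter can now DMA directly
        # into and out of this memory—no CPU-driven copies through
        # the kernel's socket buffers, unlike TCP/IP.
        mr = MR(pd, 4096,
                e.IBV_ACCESS_LOCAL_WRITE | e.IBV_ACCESS_REMOTE_READ)
        print(f'registered buffer, rkey={mr.rkey}')
```

A remote peer that learns the buffer's address and rkey (exchanged out of band) can then read it directly over the fabric, which is where the CPU savings described above come from.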

On a performance basis, RDMA based interconnects are actually more economical than other alternatives, both in initial cost and in operational expenses. Additionally, because RDMA interconnects are available with such high bandwidths, fewer cards and switch ports are needed to achieve the same storage throughput. This enables savings in server PCIe slots and data center floor space, as well as overall power consumption. It’s an actual solution for the “do more with less” mantra.

So, the next time your application performance isn’t making the grade, rather than simply adding more CPUs, storage and resources, maybe it’s time to consider a more efficient data transfer path.

Find out more: http://www.mellanox.com/page/storage

Partners HealthCare Cuts Latency of Cloud-Based Storage Solution Using Mellanox InfiniBand Technology

Interesting article just came out from Dave Raffo at SearchStorage.com. I have a quick summary below but you should certainly read the full article here: “Health care system rolls its own data storage ‘cloud’ for researchers.”

Partners HealthCare, a non-profit organization founded in 1994 by Brigham and Women’s Hospital and Massachusetts General Hospital, is an integrated health care system that offers patients a continuum of coordinated high-quality care.

Over the past few years, ever-increasing advances in the resolution and accuracy of medical devices and instrumentation technologies have led to an explosion of data in biomedical research. Partners recognized early on that a Cloud-based research compute and storage infrastructure could be a compelling alternative for their researchers. Not only would it enable them to distribute costs and provide storage services on demand, but it would save on IT management time that was spent fixing all the independent research computers distributed across the Partners network.

Initially, Partners HealthCare chose Ethernet as the network transport technology. As demand grew, the solution began hitting significant performance bottlenecks, particularly during reads and writes of hundreds of thousands of small files. The issue was found to lie with the interconnect—Ethernet created problems due to its high natural latency. In order to provide a scalable, low-latency solution, Partners HealthCare turned to InfiniBand. With InfiniBand on the storage back end, Partners experienced roughly two orders of magnitude faster read times. “One user had over 1,000 files, but only took up 100 gigs or so,” said Brent Richter, corporate manager for enterprise research infrastructure and services at Partners HealthCare System. “Doing that with Ethernet would take about 40 minutes just to list that directory. With InfiniBand, we reduced that to about a minute.”

Also, Partners chose InfiniBand over 10-Gigabit Ethernet because InfiniBand is a lower latency protocol. “InfiniBand was price competitive and has lower latency than 10-Gig Ethernet,” he said.

Richter said the final price tag came to about $1 per gigabyte.

By integrating Mellanox InfiniBand into the storage solution, Partners HealthCare was able to dramatically reduce latency and increase performance, providing its customers with faster response times and higher capacity.

Till next time,

Brian Sparks

Sr. Director, Marketing Communication

Missed Mellanox at Interop?

Just in case you missed us at Interop 2009, below are just a few of the presentations that took place in our booth.

Mellanox 10 Gigabit Ethernet and 40Gb/s InfiniBand adapters, switches and gateways are key to making your data center F.U.E.L. Efficient

 

Mellanox Product Manager, Satish Kikkeri, provides additional details on Low-Latency Ethernet

 

Mellanox Product Manager, TA Ramanujam, provides insight on how data centers can achieve true unified I/O today

 

Fusion-io’s CTO, David Flynn, presents “Moving Storage to Microsecond Time-Scales”

 

We look forward to seeing you at our next event or tradeshow.

Brian Sparks
brian@mellanox.com

Mellanox Key to Fusion-io’s Demo at Interop

I’m still pondering my take on Interop this year. It’s been a while since I’ve seen so many abandoned spaces on the show floor. Mind you, most were 10×10 or 10×20 spots, but you could tell there were others who really went light on their presence. I saw one booth which had a 40×40 space and just filled it with banner stands. Yikes! So nothing was really grabbing at me until I went to Fusion-io’s booth and saw the wall of monitors with 1,000 videos playing on it at once.

[Image: Fusion-io booth]

FINALLY SOMETHING IMPRESSIVE!

Even more amazing, the videos were all being driven by a single PCIe card with 1.2TB of flash storage on it. This one “ioSAN” card from Fusion-io completely saturated 16 servers (126 CPU cores)…and they were able to achieve this through the bandwidth performance and ultra-low latency of 20Gb/s InfiniBand via Mellanox’s ConnectX adapters. In fact, they told me the 20Gb/s InfiniBand connection would allow them to saturate even more servers, yet they only brought 16.

[Image: Fusion-io ioDrive Duo]

The video below, featuring Fusion-io’s CTO David Flynn, tells the complete story:

The ioSAN can be used as networked, server-attached storage or integrated into networked storage infrastructure, making fundamental changes to the enterprise storage area. This is a great example of how Mellanox InfiniBand is the enabling technology for next generation storage.

Talk with you again soon,

Brian Sparks
brian@mellanox.com